May 15 11:53:25.832820 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 11:53:25.832844 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu May 15 10:40:40 -00 2025
May 15 11:53:25.832854 kernel: KASLR enabled
May 15 11:53:25.832860 kernel: efi: EFI v2.7 by EDK II
May 15 11:53:25.832866 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
May 15 11:53:25.832872 kernel: random: crng init done
May 15 11:53:25.832879 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 15 11:53:25.832885 kernel: secureboot: Secure boot enabled
May 15 11:53:25.832891 kernel: ACPI: Early table checksum verification disabled
May 15 11:53:25.832898 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
May 15 11:53:25.832904 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 11:53:25.832911 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 11:53:25.832917 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 11:53:25.832923 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 11:53:25.832930 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 11:53:25.832938 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 11:53:25.832944 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 11:53:25.832951 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 11:53:25.832957 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 11:53:25.832964 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 11:53:25.832970 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 11:53:25.832976 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 15 11:53:25.832983 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 11:53:25.832989 kernel: NODE_DATA(0) allocated [mem 0xdc737dc0-0xdc73efff]
May 15 11:53:25.832996 kernel: Zone ranges:
May 15 11:53:25.833003 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 11:53:25.833010 kernel: DMA32 empty
May 15 11:53:25.833016 kernel: Normal empty
May 15 11:53:25.833022 kernel: Device empty
May 15 11:53:25.833028 kernel: Movable zone start for each node
May 15 11:53:25.833035 kernel: Early memory node ranges
May 15 11:53:25.833041 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
May 15 11:53:25.833047 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
May 15 11:53:25.833054 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
May 15 11:53:25.833060 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
May 15 11:53:25.833067 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
May 15 11:53:25.833073 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
May 15 11:53:25.833081 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
May 15 11:53:25.833087 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
May 15 11:53:25.833101 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 15 11:53:25.833112 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 11:53:25.833118 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 11:53:25.833125 kernel: psci: probing for conduit method from ACPI.
May 15 11:53:25.833132 kernel: psci: PSCIv1.1 detected in firmware.
May 15 11:53:25.833140 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 11:53:25.833146 kernel: psci: Trusted OS migration not required
May 15 11:53:25.833153 kernel: psci: SMC Calling Convention v1.1
May 15 11:53:25.833159 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 11:53:25.833167 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 15 11:53:25.833173 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 15 11:53:25.833180 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 11:53:25.833187 kernel: Detected PIPT I-cache on CPU0
May 15 11:53:25.833193 kernel: CPU features: detected: GIC system register CPU interface
May 15 11:53:25.833202 kernel: CPU features: detected: Spectre-v4
May 15 11:53:25.833209 kernel: CPU features: detected: Spectre-BHB
May 15 11:53:25.833216 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 11:53:25.833223 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 11:53:25.833230 kernel: CPU features: detected: ARM erratum 1418040
May 15 11:53:25.833237 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 11:53:25.833244 kernel: alternatives: applying boot alternatives
May 15 11:53:25.833251 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bf509bd8a8efc068ea7b7cbdc99b42bf1cbaf8a0ba93f67c8f1cf632dc3496d8
May 15 11:53:25.833258 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 11:53:25.833266 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 11:53:25.833272 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 11:53:25.833280 kernel: Fallback order for Node 0: 0
May 15 11:53:25.833287 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 15 11:53:25.833294 kernel: Policy zone: DMA
May 15 11:53:25.833300 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 11:53:25.833307 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 15 11:53:25.833314 kernel: software IO TLB: area num 4.
May 15 11:53:25.833320 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 15 11:53:25.833327 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
May 15 11:53:25.833334 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 11:53:25.833340 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 11:53:25.833347 kernel: rcu: RCU event tracing is enabled.
May 15 11:53:25.833356 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 11:53:25.833364 kernel: Trampoline variant of Tasks RCU enabled.
May 15 11:53:25.833371 kernel: Tracing variant of Tasks RCU enabled.
May 15 11:53:25.833378 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 11:53:25.833385 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 11:53:25.833392 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 11:53:25.833399 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 11:53:25.833405 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 11:53:25.833412 kernel: GICv3: 256 SPIs implemented
May 15 11:53:25.833419 kernel: GICv3: 0 Extended SPIs implemented
May 15 11:53:25.833426 kernel: Root IRQ handler: gic_handle_irq
May 15 11:53:25.833432 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 15 11:53:25.833440 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 15 11:53:25.833447 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 11:53:25.833454 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 11:53:25.833460 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 15 11:53:25.833467 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 15 11:53:25.833474 kernel: GICv3: using LPI property table @0x0000000040100000
May 15 11:53:25.833481 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 15 11:53:25.833502 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 11:53:25.833512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 11:53:25.833519 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 11:53:25.833526 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 11:53:25.833533 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 11:53:25.833541 kernel: arm-pv: using stolen time PV
May 15 11:53:25.833548 kernel: Console: colour dummy device 80x25
May 15 11:53:25.833555 kernel: ACPI: Core revision 20240827
May 15 11:53:25.833562 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 11:53:25.833569 kernel: pid_max: default: 32768 minimum: 301
May 15 11:53:25.833576 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 15 11:53:25.833582 kernel: landlock: Up and running.
May 15 11:53:25.833589 kernel: SELinux: Initializing.
May 15 11:53:25.833596 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 11:53:25.833604 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 11:53:25.833611 kernel: rcu: Hierarchical SRCU implementation.
May 15 11:53:25.833618 kernel: rcu: Max phase no-delay instances is 400.
May 15 11:53:25.833625 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 15 11:53:25.833632 kernel: Remapping and enabling EFI services.
May 15 11:53:25.833639 kernel: smp: Bringing up secondary CPUs ...
May 15 11:53:25.833645 kernel: Detected PIPT I-cache on CPU1
May 15 11:53:25.833652 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 11:53:25.833660 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 15 11:53:25.833668 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 11:53:25.833680 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 11:53:25.833687 kernel: Detected PIPT I-cache on CPU2
May 15 11:53:25.833696 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 11:53:25.833703 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 15 11:53:25.833710 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 11:53:25.833717 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 11:53:25.833725 kernel: Detected PIPT I-cache on CPU3
May 15 11:53:25.833732 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 11:53:25.833741 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 15 11:53:25.833748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 11:53:25.833755 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 11:53:25.833762 kernel: smp: Brought up 1 node, 4 CPUs
May 15 11:53:25.833769 kernel: SMP: Total of 4 processors activated.
May 15 11:53:25.833777 kernel: CPU: All CPU(s) started at EL1
May 15 11:53:25.833784 kernel: CPU features: detected: 32-bit EL0 Support
May 15 11:53:25.833791 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 11:53:25.833798 kernel: CPU features: detected: Common not Private translations
May 15 11:53:25.833807 kernel: CPU features: detected: CRC32 instructions
May 15 11:53:25.833814 kernel: CPU features: detected: Enhanced Virtualization Traps
May 15 11:53:25.833822 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 11:53:25.833829 kernel: CPU features: detected: LSE atomic instructions
May 15 11:53:25.833836 kernel: CPU features: detected: Privileged Access Never
May 15 11:53:25.833843 kernel: CPU features: detected: RAS Extension Support
May 15 11:53:25.833851 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 11:53:25.833858 kernel: alternatives: applying system-wide alternatives
May 15 11:53:25.833865 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 15 11:53:25.833874 kernel: Memory: 2438884K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 127636K reserved, 0K cma-reserved)
May 15 11:53:25.833881 kernel: devtmpfs: initialized
May 15 11:53:25.833888 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 11:53:25.833895 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 11:53:25.833903 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 15 11:53:25.833910 kernel: 0 pages in range for non-PLT usage
May 15 11:53:25.833917 kernel: 508544 pages in range for PLT usage
May 15 11:53:25.833924 kernel: pinctrl core: initialized pinctrl subsystem
May 15 11:53:25.833931 kernel: SMBIOS 3.0.0 present.
May 15 11:53:25.833940 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 15 11:53:25.833947 kernel: DMI: Memory slots populated: 1/1
May 15 11:53:25.833954 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 11:53:25.833962 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 11:53:25.833969 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 11:53:25.833976 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 11:53:25.833984 kernel: audit: initializing netlink subsys (disabled)
May 15 11:53:25.833991 kernel: audit: type=2000 audit(0.034:1): state=initialized audit_enabled=0 res=1
May 15 11:53:25.833999 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 11:53:25.834007 kernel: cpuidle: using governor menu
May 15 11:53:25.834014 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 11:53:25.834021 kernel: ASID allocator initialised with 32768 entries
May 15 11:53:25.834028 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 11:53:25.834035 kernel: Serial: AMBA PL011 UART driver
May 15 11:53:25.834043 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 11:53:25.834050 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 15 11:53:25.834056 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 15 11:53:25.834065 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 15 11:53:25.834071 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 11:53:25.834078 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 15 11:53:25.834086 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 15 11:53:25.834097 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 15 11:53:25.834106 kernel: ACPI: Added _OSI(Module Device)
May 15 11:53:25.834112 kernel: ACPI: Added _OSI(Processor Device)
May 15 11:53:25.834120 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 11:53:25.834127 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 11:53:25.834134 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 11:53:25.834142 kernel: ACPI: Interpreter enabled
May 15 11:53:25.834150 kernel: ACPI: Using GIC for interrupt routing
May 15 11:53:25.834157 kernel: ACPI: MCFG table detected, 1 entries
May 15 11:53:25.834164 kernel: ACPI: CPU0 has been hot-added
May 15 11:53:25.834170 kernel: ACPI: CPU1 has been hot-added
May 15 11:53:25.834177 kernel: ACPI: CPU2 has been hot-added
May 15 11:53:25.834184 kernel: ACPI: CPU3 has been hot-added
May 15 11:53:25.834191 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 11:53:25.834198 kernel: printk: legacy console [ttyAMA0] enabled
May 15 11:53:25.834206 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 11:53:25.834341 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 11:53:25.834409 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 11:53:25.834471 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 11:53:25.834550 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 11:53:25.834611 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 11:53:25.834620 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 11:53:25.834630 kernel: PCI host bridge to bus 0000:00
May 15 11:53:25.834701 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 11:53:25.834763 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 11:53:25.834818 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 11:53:25.834888 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 11:53:25.834971 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 15 11:53:25.835041 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 15 11:53:25.835119 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 15 11:53:25.835186 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 15 11:53:25.835257 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 11:53:25.835319 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 15 11:53:25.835382 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 15 11:53:25.835446 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 15 11:53:25.835581 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 11:53:25.835645 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 11:53:25.835704 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 11:53:25.835714 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 11:53:25.835722 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 11:53:25.835730 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 11:53:25.835737 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 11:53:25.835744 kernel: iommu: Default domain type: Translated
May 15 11:53:25.835754 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 11:53:25.835761 kernel: efivars: Registered efivars operations
May 15 11:53:25.835768 kernel: vgaarb: loaded
May 15 11:53:25.835776 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 11:53:25.835783 kernel: VFS: Disk quotas dquot_6.6.0
May 15 11:53:25.835790 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 11:53:25.835797 kernel: pnp: PnP ACPI init
May 15 11:53:25.835871 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 11:53:25.835882 kernel: pnp: PnP ACPI: found 1 devices
May 15 11:53:25.835891 kernel: NET: Registered PF_INET protocol family
May 15 11:53:25.835898 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 11:53:25.835908 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 11:53:25.835918 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 11:53:25.835928 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 11:53:25.835935 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 11:53:25.835943 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 11:53:25.835950 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 11:53:25.835957 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 11:53:25.835966 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 11:53:25.835973 kernel: PCI: CLS 0 bytes, default 64
May 15 11:53:25.835981 kernel: kvm [1]: HYP mode not available
May 15 11:53:25.835988 kernel: Initialise system trusted keyrings
May 15 11:53:25.835995 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 11:53:25.836002 kernel: Key type asymmetric registered
May 15 11:53:25.836010 kernel: Asymmetric key parser 'x509' registered
May 15 11:53:25.836017 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 15 11:53:25.836024 kernel: io scheduler mq-deadline registered
May 15 11:53:25.836033 kernel: io scheduler kyber registered
May 15 11:53:25.836040 kernel: io scheduler bfq registered
May 15 11:53:25.836048 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 11:53:25.836055 kernel: ACPI: button: Power Button [PWRB]
May 15 11:53:25.836062 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 11:53:25.836137 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 11:53:25.836148 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 11:53:25.836155 kernel: thunder_xcv, ver 1.0
May 15 11:53:25.836163 kernel: thunder_bgx, ver 1.0
May 15 11:53:25.836172 kernel: nicpf, ver 1.0
May 15 11:53:25.836180 kernel: nicvf, ver 1.0
May 15 11:53:25.836254 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 11:53:25.836316 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T11:53:25 UTC (1747310005)
May 15 11:53:25.836325 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 11:53:25.836333 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 15 11:53:25.836340 kernel: watchdog: NMI not fully supported
May 15 11:53:25.836347 kernel: watchdog: Hard watchdog permanently disabled
May 15 11:53:25.836356 kernel: NET: Registered PF_INET6 protocol family
May 15 11:53:25.836364 kernel: Segment Routing with IPv6
May 15 11:53:25.836371 kernel: In-situ OAM (IOAM) with IPv6
May 15 11:53:25.836378 kernel: NET: Registered PF_PACKET protocol family
May 15 11:53:25.836385 kernel: Key type dns_resolver registered
May 15 11:53:25.836392 kernel: registered taskstats version 1
May 15 11:53:25.836399 kernel: Loading compiled-in X.509 certificates
May 15 11:53:25.836406 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 6c8c7c40bf8565fead88558d446d0157ca21f08d'
May 15 11:53:25.836413 kernel: Demotion targets for Node 0: null
May 15 11:53:25.836422 kernel: Key type .fscrypt registered
May 15 11:53:25.836429 kernel: Key type fscrypt-provisioning registered
May 15 11:53:25.836436 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 11:53:25.836444 kernel: ima: Allocated hash algorithm: sha1
May 15 11:53:25.836451 kernel: ima: No architecture policies found
May 15 11:53:25.836458 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 11:53:25.836465 kernel: clk: Disabling unused clocks
May 15 11:53:25.836473 kernel: PM: genpd: Disabling unused power domains
May 15 11:53:25.836480 kernel: Warning: unable to open an initial console.
May 15 11:53:25.836498 kernel: Freeing unused kernel memory: 39424K
May 15 11:53:25.836506 kernel: Run /init as init process
May 15 11:53:25.836513 kernel: with arguments:
May 15 11:53:25.836520 kernel: /init
May 15 11:53:25.836527 kernel: with environment:
May 15 11:53:25.836534 kernel: HOME=/
May 15 11:53:25.836541 kernel: TERM=linux
May 15 11:53:25.836549 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 11:53:25.836557 systemd[1]: Successfully made /usr/ read-only.
May 15 11:53:25.836569 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 11:53:25.836578 systemd[1]: Detected virtualization kvm.
May 15 11:53:25.836585 systemd[1]: Detected architecture arm64.
May 15 11:53:25.836593 systemd[1]: Running in initrd.
May 15 11:53:25.836600 systemd[1]: No hostname configured, using default hostname.
May 15 11:53:25.836608 systemd[1]: Hostname set to .
May 15 11:53:25.836616 systemd[1]: Initializing machine ID from VM UUID.
May 15 11:53:25.836625 systemd[1]: Queued start job for default target initrd.target.
May 15 11:53:25.836632 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 11:53:25.836640 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 11:53:25.836649 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 11:53:25.836656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 11:53:25.836664 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 11:53:25.836673 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 11:53:25.836683 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 11:53:25.836691 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 11:53:25.836699 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 11:53:25.836706 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 11:53:25.836714 systemd[1]: Reached target paths.target - Path Units.
May 15 11:53:25.836722 systemd[1]: Reached target slices.target - Slice Units.
May 15 11:53:25.836729 systemd[1]: Reached target swap.target - Swaps.
May 15 11:53:25.836737 systemd[1]: Reached target timers.target - Timer Units.
May 15 11:53:25.836747 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 11:53:25.836755 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 11:53:25.836763 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 11:53:25.836771 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 11:53:25.836778 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 11:53:25.836786 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 11:53:25.836794 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 11:53:25.836802 systemd[1]: Reached target sockets.target - Socket Units.
May 15 11:53:25.836811 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 11:53:25.836819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 11:53:25.836826 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 11:53:25.836835 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 15 11:53:25.836842 systemd[1]: Starting systemd-fsck-usr.service...
May 15 11:53:25.836850 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 11:53:25.836858 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 11:53:25.836866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 11:53:25.836873 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 11:53:25.836883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 11:53:25.836891 systemd[1]: Finished systemd-fsck-usr.service.
May 15 11:53:25.836899 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 11:53:25.836924 systemd-journald[244]: Collecting audit messages is disabled.
May 15 11:53:25.836946 systemd-journald[244]: Journal started
May 15 11:53:25.836965 systemd-journald[244]: Runtime Journal (/run/log/journal/d2e42ef6a2064bee8e1deb7ae94614e7) is 6M, max 48.5M, 42.4M free.
May 15 11:53:25.825875 systemd-modules-load[247]: Inserted module 'overlay'
May 15 11:53:25.846214 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 11:53:25.846238 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 11:53:25.848901 systemd-modules-load[247]: Inserted module 'br_netfilter'
May 15 11:53:25.850711 kernel: Bridge firewalling registered
May 15 11:53:25.850733 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 11:53:25.851956 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 11:53:25.854428 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 11:53:25.858599 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 11:53:25.860659 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 11:53:25.863089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 11:53:25.871306 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 11:53:25.880539 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 11:53:25.882004 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 11:53:25.882228 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 15 11:53:25.885823 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 11:53:25.887889 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 11:53:25.891313 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 11:53:25.893687 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 11:53:25.914596 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bf509bd8a8efc068ea7b7cbdc99b42bf1cbaf8a0ba93f67c8f1cf632dc3496d8
May 15 11:53:25.930784 systemd-resolved[291]: Positive Trust Anchors:
May 15 11:53:25.930803 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 11:53:25.930842 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 11:53:25.935953 systemd-resolved[291]: Defaulting to hostname 'linux'.
May 15 11:53:25.937163 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 11:53:25.940780 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 11:53:25.997530 kernel: SCSI subsystem initialized
May 15 11:53:26.002503 kernel: Loading iSCSI transport class v2.0-870.
May 15 11:53:26.010527 kernel: iscsi: registered transport (tcp)
May 15 11:53:26.024510 kernel: iscsi: registered transport (qla4xxx)
May 15 11:53:26.024540 kernel: QLogic iSCSI HBA Driver
May 15 11:53:26.046029 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 11:53:26.068023 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 11:53:26.070418 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 11:53:26.118844 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 11:53:26.121577 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 11:53:26.193529 kernel: raid6: neonx8 gen() 15793 MB/s May 15 11:53:26.210516 kernel: raid6: neonx4 gen() 15795 MB/s May 15 11:53:26.227515 kernel: raid6: neonx2 gen() 13179 MB/s May 15 11:53:26.244511 kernel: raid6: neonx1 gen() 10438 MB/s May 15 11:53:26.261510 kernel: raid6: int64x8 gen() 6895 MB/s May 15 11:53:26.278508 kernel: raid6: int64x4 gen() 7346 MB/s May 15 11:53:26.295513 kernel: raid6: int64x2 gen() 6099 MB/s May 15 11:53:26.312774 kernel: raid6: int64x1 gen() 5044 MB/s May 15 11:53:26.312791 kernel: raid6: using algorithm neonx4 gen() 15795 MB/s May 15 11:53:26.330707 kernel: raid6: .... xor() 12353 MB/s, rmw enabled May 15 11:53:26.330734 kernel: raid6: using neon recovery algorithm May 15 11:53:26.338624 kernel: xor: measuring software checksum speed May 15 11:53:26.339517 kernel: 8regs : 1294 MB/sec May 15 11:53:26.340733 kernel: 32regs : 18544 MB/sec May 15 11:53:26.340751 kernel: arm64_neon : 26356 MB/sec May 15 11:53:26.340761 kernel: xor: using function: arm64_neon (26356 MB/sec) May 15 11:53:26.418515 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 11:53:26.429070 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 11:53:26.432078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 11:53:26.471215 systemd-udevd[500]: Using default interface naming scheme 'v255'. May 15 11:53:26.475476 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 11:53:26.477591 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 11:53:26.504504 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation May 15 11:53:26.535026 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 11:53:26.537966 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 11:53:26.600955 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 15 11:53:26.603236 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 11:53:26.649885 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 15 11:53:26.660465 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 11:53:26.662022 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 11:53:26.662042 kernel: GPT:9289727 != 19775487 May 15 11:53:26.662052 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 11:53:26.662061 kernel: GPT:9289727 != 19775487 May 15 11:53:26.662069 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 11:53:26.662078 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 11:53:26.651249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 11:53:26.651367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 11:53:26.656872 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 11:53:26.660682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 11:53:26.690638 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 11:53:26.691987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 11:53:26.698968 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 11:53:26.707783 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 11:53:26.718337 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 11:53:26.719676 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 11:53:26.728271 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 15 11:53:26.729610 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 11:53:26.731690 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 11:53:26.733746 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 11:53:26.736304 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 11:53:26.738128 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 11:53:26.761892 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 11:53:26.774717 disk-uuid[591]: Primary Header is updated. May 15 11:53:26.774717 disk-uuid[591]: Secondary Entries is updated. May 15 11:53:26.774717 disk-uuid[591]: Secondary Header is updated. May 15 11:53:26.779515 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 11:53:27.790519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 11:53:27.792926 disk-uuid[599]: The operation has completed successfully. May 15 11:53:27.815665 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 11:53:27.815764 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 11:53:27.842300 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 11:53:27.858224 sh[612]: Success May 15 11:53:27.872565 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 11:53:27.872612 kernel: device-mapper: uevent: version 1.0.3 May 15 11:53:27.873823 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 15 11:53:27.885507 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 15 11:53:27.915361 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 11:53:27.918091 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 15 11:53:27.934269 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 11:53:27.941221 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 15 11:53:27.941258 kernel: BTRFS: device fsid 0a747134-9b18-4ef1-ad11-5025524c86c8 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (624) May 15 11:53:27.943581 kernel: BTRFS info (device dm-0): first mount of filesystem 0a747134-9b18-4ef1-ad11-5025524c86c8 May 15 11:53:27.943600 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 11:53:27.943619 kernel: BTRFS info (device dm-0): using free-space-tree May 15 11:53:27.947291 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 11:53:27.948556 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 15 11:53:27.949887 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 11:53:27.950742 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 11:53:27.953568 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 11:53:27.973516 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (654) May 15 11:53:27.976536 kernel: BTRFS info (device vda6): first mount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 11:53:27.976578 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 11:53:27.976589 kernel: BTRFS info (device vda6): using free-space-tree May 15 11:53:27.984580 kernel: BTRFS info (device vda6): last unmount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 11:53:27.985166 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 11:53:27.987057 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 15 11:53:28.058511 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 11:53:28.061339 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 11:53:28.112465 systemd-networkd[800]: lo: Link UP May 15 11:53:28.112478 systemd-networkd[800]: lo: Gained carrier May 15 11:53:28.113238 systemd-networkd[800]: Enumeration completed May 15 11:53:28.114018 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 11:53:28.114022 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 11:53:28.114681 systemd-networkd[800]: eth0: Link UP May 15 11:53:28.114684 systemd-networkd[800]: eth0: Gained carrier May 15 11:53:28.114692 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 11:53:28.114924 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 11:53:28.116017 systemd[1]: Reached target network.target - Network. 
May 15 11:53:28.132316 ignition[700]: Ignition 2.21.0 May 15 11:53:28.132330 ignition[700]: Stage: fetch-offline May 15 11:53:28.132545 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 11:53:28.132359 ignition[700]: no configs at "/usr/lib/ignition/base.d" May 15 11:53:28.132367 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 11:53:28.132568 ignition[700]: parsed url from cmdline: "" May 15 11:53:28.132571 ignition[700]: no config URL provided May 15 11:53:28.132575 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" May 15 11:53:28.132582 ignition[700]: no config at "/usr/lib/ignition/user.ign" May 15 11:53:28.132601 ignition[700]: op(1): [started] loading QEMU firmware config module May 15 11:53:28.132605 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 11:53:28.141447 ignition[700]: op(1): [finished] loading QEMU firmware config module May 15 11:53:28.185340 ignition[700]: parsing config with SHA512: 17b962ba253d51ca705230896a8ddf3a3587dc586dc875b524e810cb5e8cc20faa6b757d5647f68ce9757295e53675a7770d7f1ea98fdaf9ae16885512a2e07a May 15 11:53:28.189917 unknown[700]: fetched base config from "system" May 15 11:53:28.189930 unknown[700]: fetched user config from "qemu" May 15 11:53:28.190258 ignition[700]: fetch-offline: fetch-offline passed May 15 11:53:28.192858 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 11:53:28.190308 ignition[700]: Ignition finished successfully May 15 11:53:28.194107 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 11:53:28.194909 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 15 11:53:28.224791 ignition[810]: Ignition 2.21.0 May 15 11:53:28.224807 ignition[810]: Stage: kargs May 15 11:53:28.224942 ignition[810]: no configs at "/usr/lib/ignition/base.d" May 15 11:53:28.224950 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 11:53:28.227282 ignition[810]: kargs: kargs passed May 15 11:53:28.227343 ignition[810]: Ignition finished successfully May 15 11:53:28.229958 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 11:53:28.232277 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 11:53:28.272046 ignition[818]: Ignition 2.21.0 May 15 11:53:28.272062 ignition[818]: Stage: disks May 15 11:53:28.272210 ignition[818]: no configs at "/usr/lib/ignition/base.d" May 15 11:53:28.272220 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 11:53:28.273538 ignition[818]: disks: disks passed May 15 11:53:28.275268 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 11:53:28.273587 ignition[818]: Ignition finished successfully May 15 11:53:28.276812 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 11:53:28.278195 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 11:53:28.280190 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 11:53:28.281686 systemd[1]: Reached target sysinit.target - System Initialization. May 15 11:53:28.283480 systemd[1]: Reached target basic.target - Basic System. May 15 11:53:28.286257 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 11:53:28.315800 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 15 11:53:28.319885 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 11:53:28.322615 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 15 11:53:28.389519 kernel: EXT4-fs (vda9): mounted filesystem 7753583f-75f7-43aa-89cb-b5e5a7f28ed5 r/w with ordered data mode. Quota mode: none. May 15 11:53:28.389754 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 11:53:28.390961 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 11:53:28.393536 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 11:53:28.395066 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 11:53:28.396066 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 11:53:28.396128 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 11:53:28.396150 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 11:53:28.405008 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 11:53:28.407452 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 11:53:28.410471 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (836) May 15 11:53:28.412773 kernel: BTRFS info (device vda6): first mount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 11:53:28.412802 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 11:53:28.412813 kernel: BTRFS info (device vda6): using free-space-tree May 15 11:53:28.420668 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 11:53:28.458322 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory May 15 11:53:28.461306 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory May 15 11:53:28.464123 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory May 15 11:53:28.467442 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory May 15 11:53:28.547300 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 11:53:28.549430 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 11:53:28.551103 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 11:53:28.568680 kernel: BTRFS info (device vda6): last unmount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 11:53:28.593549 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 11:53:28.595594 ignition[951]: INFO : Ignition 2.21.0 May 15 11:53:28.595594 ignition[951]: INFO : Stage: mount May 15 11:53:28.595594 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 11:53:28.595594 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 11:53:28.605585 ignition[951]: INFO : mount: mount passed May 15 11:53:28.605585 ignition[951]: INFO : Ignition finished successfully May 15 11:53:28.598934 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 11:53:28.606567 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 11:53:28.940381 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 11:53:28.941893 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 15 11:53:28.967523 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (963) May 15 11:53:28.971134 kernel: BTRFS info (device vda6): first mount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 11:53:28.971166 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 11:53:28.971177 kernel: BTRFS info (device vda6): using free-space-tree May 15 11:53:28.975931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 11:53:29.011083 ignition[981]: INFO : Ignition 2.21.0 May 15 11:53:29.011083 ignition[981]: INFO : Stage: files May 15 11:53:29.012807 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 11:53:29.012807 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 11:53:29.012807 ignition[981]: DEBUG : files: compiled without relabeling support, skipping May 15 11:53:29.016327 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 11:53:29.016327 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 11:53:29.016327 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 11:53:29.016327 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 11:53:29.016327 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 11:53:29.015810 unknown[981]: wrote ssh authorized keys file for user: core May 15 11:53:29.023891 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 15 11:53:29.023891 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 15 11:53:29.166413 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK May 15 11:53:29.328810 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 15 11:53:29.328810 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 11:53:29.332735 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 11:53:29.332735 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 11:53:29.332735 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 11:53:29.332735 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 11:53:29.332735 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 11:53:29.332735 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 11:53:29.332735 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 11:53:29.332735 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 11:53:29.332735 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 11:53:29.332735 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 11:53:29.332735 ignition[981]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 11:53:29.351527 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 11:53:29.351527 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 15 11:53:29.388658 systemd-networkd[800]: eth0: Gained IPv6LL May 15 11:53:29.725445 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 11:53:30.324715 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 11:53:30.324715 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 11:53:30.328511 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 11:53:30.328511 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 11:53:30.328511 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 11:53:30.328511 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 11:53:30.328511 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 11:53:30.328511 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 11:53:30.328511 ignition[981]: INFO : files: op(d): 
[finished] processing unit "coreos-metadata.service" May 15 11:53:30.328511 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 15 11:53:30.346190 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 11:53:30.350255 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 11:53:30.351778 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 15 11:53:30.351778 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 15 11:53:30.351778 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 15 11:53:30.351778 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 11:53:30.351778 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 11:53:30.351778 ignition[981]: INFO : files: files passed May 15 11:53:30.351778 ignition[981]: INFO : Ignition finished successfully May 15 11:53:30.352432 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 11:53:30.355343 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 11:53:30.358439 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 11:53:30.370568 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 11:53:30.371679 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 15 11:53:30.373911 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory May 15 11:53:30.375257 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 11:53:30.375257 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 11:53:30.378179 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 11:53:30.377661 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 11:53:30.379414 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 11:53:30.383648 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 11:53:30.413426 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 11:53:30.414404 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 11:53:30.415825 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 11:53:30.417578 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 11:53:30.419394 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 11:53:30.420217 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 11:53:30.441540 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 11:53:30.444595 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 11:53:30.466764 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 11:53:30.468899 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 11:53:30.470927 systemd[1]: Stopped target timers.target - Timer Units. 
May 15 11:53:30.471793 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 11:53:30.471920 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 11:53:30.474941 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 11:53:30.477315 systemd[1]: Stopped target basic.target - Basic System. May 15 11:53:30.479169 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 11:53:30.480592 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 11:53:30.483525 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 11:53:30.485646 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 15 11:53:30.488000 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 11:53:30.489600 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 11:53:30.492686 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 11:53:30.494577 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 11:53:30.497128 systemd[1]: Stopped target swap.target - Swaps. May 15 11:53:30.498294 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 11:53:30.498417 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 11:53:30.500972 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 11:53:30.503029 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 11:53:30.505133 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 11:53:30.509563 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 11:53:30.510716 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 11:53:30.510837 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
May 15 11:53:30.513275 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 11:53:30.513388 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 11:53:30.515372 systemd[1]: Stopped target paths.target - Path Units. May 15 11:53:30.516773 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 11:53:30.521570 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 11:53:30.522699 systemd[1]: Stopped target slices.target - Slice Units. May 15 11:53:30.524450 systemd[1]: Stopped target sockets.target - Socket Units. May 15 11:53:30.525831 systemd[1]: iscsid.socket: Deactivated successfully. May 15 11:53:30.525921 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 11:53:30.527291 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 11:53:30.527366 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 11:53:30.528673 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 11:53:30.528786 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 11:53:30.530327 systemd[1]: ignition-files.service: Deactivated successfully. May 15 11:53:30.530432 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 11:53:30.532550 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 11:53:30.534194 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 11:53:30.534328 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 11:53:30.536970 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 11:53:30.538316 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 11:53:30.538439 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 15 11:53:30.540146 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 11:53:30.540246 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 11:53:30.545185 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 11:53:30.545654 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 11:53:30.553333 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 11:53:30.560866 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 11:53:30.560977 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 11:53:30.564541 ignition[1035]: INFO : Ignition 2.21.0 May 15 11:53:30.564541 ignition[1035]: INFO : Stage: umount May 15 11:53:30.564541 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 11:53:30.564541 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 11:53:30.568552 ignition[1035]: INFO : umount: umount passed May 15 11:53:30.568552 ignition[1035]: INFO : Ignition finished successfully May 15 11:53:30.568518 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 11:53:30.568637 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 11:53:30.570105 systemd[1]: Stopped target network.target - Network. May 15 11:53:30.572238 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 11:53:30.572304 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 11:53:30.573278 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 11:53:30.573320 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 11:53:30.574821 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 11:53:30.574865 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 11:53:30.576319 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
May 15 11:53:30.576359 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 11:53:30.577743 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 11:53:30.577788 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 11:53:30.579419 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 11:53:30.581103 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 11:53:30.587093 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 11:53:30.587196 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 11:53:30.592458 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 11:53:30.594374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 11:53:30.594417 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 11:53:30.600393 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 11:53:30.600618 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 11:53:30.600715 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 11:53:30.605356 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 11:53:30.605771 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 15 11:53:30.607501 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 11:53:30.607554 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 11:53:30.610354 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 11:53:30.611242 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 11:53:30.611295 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 15 11:53:30.613142 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 11:53:30.613183 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 11:53:30.615604 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 11:53:30.615646 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 11:53:30.617356 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 11:53:30.622412 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 11:53:30.628827 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 11:53:30.629565 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 11:53:30.634145 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 11:53:30.634280 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 11:53:30.636318 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 11:53:30.636356 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 11:53:30.638060 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 11:53:30.638104 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 11:53:30.639672 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 11:53:30.639716 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 11:53:30.642314 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 11:53:30.642361 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 11:53:30.644917 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 11:53:30.644967 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 11:53:30.648115 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 11:53:30.649164 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 15 11:53:30.649224 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 15 11:53:30.652014 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 11:53:30.652054 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 11:53:30.654884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 11:53:30.654925 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 11:53:30.675026 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 11:53:30.675169 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 11:53:30.677329 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 11:53:30.679617 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 11:53:30.712638 systemd[1]: Switching root.
May 15 11:53:30.741529 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
May 15 11:53:30.741575 systemd-journald[244]: Journal stopped
May 15 11:53:31.550879 kernel: SELinux: policy capability network_peer_controls=1
May 15 11:53:31.550938 kernel: SELinux: policy capability open_perms=1
May 15 11:53:31.550950 kernel: SELinux: policy capability extended_socket_class=1
May 15 11:53:31.550960 kernel: SELinux: policy capability always_check_network=0
May 15 11:53:31.550974 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 11:53:31.550983 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 11:53:31.550993 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 11:53:31.551004 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 11:53:31.551017 kernel: SELinux: policy capability userspace_initial_context=0
May 15 11:53:31.551026 kernel: audit: type=1403 audit(1747310010.906:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 11:53:31.551038 systemd[1]: Successfully loaded SELinux policy in 50.385ms.
May 15 11:53:31.551058 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.345ms.
May 15 11:53:31.551079 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 11:53:31.551093 systemd[1]: Detected virtualization kvm.
May 15 11:53:31.551103 systemd[1]: Detected architecture arm64.
May 15 11:53:31.551113 systemd[1]: Detected first boot.
May 15 11:53:31.551124 systemd[1]: Initializing machine ID from VM UUID.
May 15 11:53:31.551134 zram_generator::config[1080]: No configuration found.
May 15 11:53:31.551144 kernel: NET: Registered PF_VSOCK protocol family
May 15 11:53:31.551154 systemd[1]: Populated /etc with preset unit settings.
May 15 11:53:31.551167 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 15 11:53:31.551178 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 11:53:31.551189 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 11:53:31.551199 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 11:53:31.551209 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 11:53:31.551222 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 11:53:31.551232 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 11:53:31.551242 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 11:53:31.551255 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 11:53:31.551266 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 11:53:31.551277 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 11:53:31.551289 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 11:53:31.551300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 11:53:31.551311 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 11:53:31.551322 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 11:53:31.551333 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 11:53:31.551345 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 11:53:31.551358 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 11:53:31.551368 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 15 11:53:31.551378 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 11:53:31.551389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 11:53:31.551400 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 11:53:31.551411 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 11:53:31.551424 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 11:53:31.551435 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 11:53:31.551447 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 11:53:31.551458 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 11:53:31.551469 systemd[1]: Reached target slices.target - Slice Units.
May 15 11:53:31.551479 systemd[1]: Reached target swap.target - Swaps.
May 15 11:53:31.551560 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 11:53:31.551586 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 11:53:31.551597 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 15 11:53:31.551608 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 11:53:31.551620 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 11:53:31.551633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 11:53:31.551644 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 11:53:31.551655 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 11:53:31.551665 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 11:53:31.551676 systemd[1]: Mounting media.mount - External Media Directory...
May 15 11:53:31.551687 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 11:53:31.551698 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 11:53:31.551712 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 11:53:31.551723 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 11:53:31.551735 systemd[1]: Reached target machines.target - Containers.
May 15 11:53:31.551745 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 11:53:31.551756 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 11:53:31.551766 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 11:53:31.551777 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 11:53:31.551787 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 11:53:31.551797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 11:53:31.551808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 11:53:31.551819 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 11:53:31.551829 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 11:53:31.551840 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 11:53:31.551852 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 11:53:31.551862 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 11:53:31.551873 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 11:53:31.551883 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 11:53:31.551895 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 11:53:31.551907 kernel: loop: module loaded
May 15 11:53:31.551917 kernel: fuse: init (API version 7.41)
May 15 11:53:31.551927 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 11:53:31.551937 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 11:53:31.551948 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 11:53:31.551958 kernel: ACPI: bus type drm_connector registered
May 15 11:53:31.551968 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 11:53:31.551979 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 15 11:53:31.551989 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 11:53:31.552003 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 11:53:31.552013 systemd[1]: Stopped verity-setup.service.
May 15 11:53:31.552024 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 11:53:31.552059 systemd-journald[1148]: Collecting audit messages is disabled.
May 15 11:53:31.552094 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 11:53:31.552106 systemd[1]: Mounted media.mount - External Media Directory.
May 15 11:53:31.552118 systemd-journald[1148]: Journal started
May 15 11:53:31.552139 systemd-journald[1148]: Runtime Journal (/run/log/journal/d2e42ef6a2064bee8e1deb7ae94614e7) is 6M, max 48.5M, 42.4M free.
May 15 11:53:31.560553 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 11:53:31.560591 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 11:53:31.560605 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 11:53:31.302974 systemd[1]: Queued start job for default target multi-user.target.
May 15 11:53:31.328692 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 11:53:31.329091 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 11:53:31.564303 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 11:53:31.566507 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 11:53:31.568563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 11:53:31.569999 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 11:53:31.570185 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 11:53:31.571700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 11:53:31.571876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 11:53:31.573362 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 11:53:31.573557 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 11:53:31.574815 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 11:53:31.575017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 11:53:31.576441 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 11:53:31.576654 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 11:53:31.577825 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 11:53:31.577983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 11:53:31.579390 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 11:53:31.580785 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 11:53:31.582214 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 11:53:31.583947 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 15 11:53:31.597508 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 11:53:31.600080 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 11:53:31.602378 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 11:53:31.603680 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 11:53:31.603720 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 11:53:31.605723 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 15 11:53:31.614417 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 11:53:31.615656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 11:53:31.617006 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 11:53:31.618993 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 11:53:31.620379 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 11:53:31.623654 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 11:53:31.624782 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 11:53:31.627061 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 11:53:31.630291 systemd-journald[1148]: Time spent on flushing to /var/log/journal/d2e42ef6a2064bee8e1deb7ae94614e7 is 23.978ms for 877 entries.
May 15 11:53:31.630291 systemd-journald[1148]: System Journal (/var/log/journal/d2e42ef6a2064bee8e1deb7ae94614e7) is 8M, max 195.6M, 187.6M free.
May 15 11:53:31.674612 systemd-journald[1148]: Received client request to flush runtime journal.
May 15 11:53:31.674659 kernel: loop0: detected capacity change from 0 to 138376
May 15 11:53:31.630211 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 11:53:31.633417 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 11:53:31.637527 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 11:53:31.639948 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 11:53:31.643553 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 11:53:31.664895 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 11:53:31.667015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 11:53:31.668434 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 11:53:31.670891 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 15 11:53:31.682932 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 11:53:31.686187 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 11:53:31.693394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 11:53:31.695889 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 11:53:31.705548 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 15 11:53:31.712517 kernel: loop1: detected capacity change from 0 to 201592
May 15 11:53:31.720630 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
May 15 11:53:31.720650 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
May 15 11:53:31.724795 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 11:53:31.748534 kernel: loop2: detected capacity change from 0 to 107312
May 15 11:53:31.778539 kernel: loop3: detected capacity change from 0 to 138376
May 15 11:53:31.784512 kernel: loop4: detected capacity change from 0 to 201592
May 15 11:53:31.790523 kernel: loop5: detected capacity change from 0 to 107312
May 15 11:53:31.793145 (sd-merge)[1219]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 15 11:53:31.793535 (sd-merge)[1219]: Merged extensions into '/usr'.
May 15 11:53:31.796790 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 11:53:31.796804 systemd[1]: Reloading...
May 15 11:53:31.852530 zram_generator::config[1244]: No configuration found.
May 15 11:53:31.924878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 11:53:31.926965 ldconfig[1191]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 11:53:31.995234 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 11:53:31.995426 systemd[1]: Reloading finished in 198 ms.
May 15 11:53:32.023199 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 11:53:32.025351 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 11:53:32.043880 systemd[1]: Starting ensure-sysext.service...
May 15 11:53:32.045544 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 11:53:32.059456 systemd[1]: Reload requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)...
May 15 11:53:32.059470 systemd[1]: Reloading...
May 15 11:53:32.064587 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 15 11:53:32.064863 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 15 11:53:32.065183 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 11:53:32.065453 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 11:53:32.066174 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 11:53:32.066468 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
May 15 11:53:32.066663 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
May 15 11:53:32.069192 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
May 15 11:53:32.069284 systemd-tmpfiles[1280]: Skipping /boot
May 15 11:53:32.078426 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
May 15 11:53:32.078441 systemd-tmpfiles[1280]: Skipping /boot
May 15 11:53:32.115512 zram_generator::config[1310]: No configuration found.
May 15 11:53:32.187574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 11:53:32.258089 systemd[1]: Reloading finished in 198 ms.
May 15 11:53:32.281306 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 11:53:32.303696 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 11:53:32.310993 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 11:53:32.313236 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 11:53:32.323233 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 11:53:32.326776 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 11:53:32.329802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 11:53:32.331999 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 11:53:32.339693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 11:53:32.341701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 11:53:32.344238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 11:53:32.347503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 11:53:32.348483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 11:53:32.348674 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 11:53:32.350112 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 11:53:32.358847 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 11:53:32.360975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 11:53:32.361172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 11:53:32.364002 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 11:53:32.364154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 11:53:32.366159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 11:53:32.366515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 11:53:32.371090 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 11:53:32.371693 systemd-udevd[1348]: Using default interface naming scheme 'v255'.
May 15 11:53:32.379255 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 11:53:32.381980 systemd[1]: Finished ensure-sysext.service.
May 15 11:53:32.386022 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 11:53:32.387279 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 11:53:32.389634 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 11:53:32.397045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 11:53:32.401568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 11:53:32.402709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 11:53:32.402751 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 11:53:32.407582 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 11:53:32.409858 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 11:53:32.412642 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 11:53:32.412908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 11:53:32.414253 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 11:53:32.415843 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 11:53:32.416440 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 11:53:32.431932 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 11:53:32.434915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 11:53:32.435096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 11:53:32.438659 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 11:53:32.439561 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 11:53:32.442879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 11:53:32.443316 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 11:53:32.456408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 11:53:32.456464 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 11:53:32.461449 augenrules[1427]: No rules
May 15 11:53:32.464822 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 11:53:32.465041 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 11:53:32.475823 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 11:53:32.502428 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 15 11:53:32.515172 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 11:53:32.522437 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 11:53:32.556624 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 11:53:32.591803 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 11:53:32.593478 systemd[1]: Reached target time-set.target - System Time Set.
May 15 11:53:32.613640 systemd-networkd[1414]: lo: Link UP
May 15 11:53:32.613651 systemd-networkd[1414]: lo: Gained carrier
May 15 11:53:32.614586 systemd-networkd[1414]: Enumeration completed
May 15 11:53:32.614702 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 11:53:32.620680 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 15 11:53:32.622778 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 11:53:32.622809 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 11:53:32.623274 systemd-networkd[1414]: eth0: Link UP
May 15 11:53:32.623403 systemd-networkd[1414]: eth0: Gained carrier
May 15 11:53:32.623417 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 11:53:32.624104 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 11:53:32.654531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 11:53:32.658556 systemd-networkd[1414]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 11:53:32.659747 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection.
May 15 11:53:32.660399 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 11:53:32.660464 systemd-timesyncd[1402]: Initial clock synchronization to Thu 2025-05-15 11:53:32.610798 UTC.
May 15 11:53:32.668987 systemd-resolved[1346]: Positive Trust Anchors:
May 15 11:53:32.669006 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 11:53:32.669039 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 11:53:32.675550 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 15 11:53:32.680015 systemd-resolved[1346]: Defaulting to hostname 'linux'.
May 15 11:53:32.684383 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 11:53:32.685965 systemd[1]: Reached target network.target - Network.
May 15 11:53:32.686943 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 11:53:32.706571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 11:53:32.707837 systemd[1]: Reached target sysinit.target - System Initialization. May 15 11:53:32.708987 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 11:53:32.710133 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 11:53:32.711407 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 11:53:32.712468 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 11:53:32.713656 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 11:53:32.714720 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 11:53:32.714754 systemd[1]: Reached target paths.target - Path Units. May 15 11:53:32.715481 systemd[1]: Reached target timers.target - Timer Units. May 15 11:53:32.717299 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 11:53:32.719563 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 11:53:32.722584 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 11:53:32.723876 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 11:53:32.725022 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 11:53:32.731362 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 11:53:32.732729 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 11:53:32.734251 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 11:53:32.735344 systemd[1]: Reached target sockets.target - Socket Units. May 15 11:53:32.736344 systemd[1]: Reached target basic.target - Basic System. 
May 15 11:53:32.737349 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 11:53:32.737379 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 11:53:32.738355 systemd[1]: Starting containerd.service - containerd container runtime... May 15 11:53:32.740231 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 11:53:32.741998 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 11:53:32.743940 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 11:53:32.745714 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 11:53:32.746730 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 11:53:32.747707 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 11:53:32.749600 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 11:53:32.753934 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 11:53:32.757814 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 11:53:32.761018 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 11:53:32.762971 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 11:53:32.767622 jq[1471]: false May 15 11:53:32.763412 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 11:53:32.765735 systemd[1]: Starting update-engine.service - Update Engine... 
May 15 11:53:32.767984 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 11:53:32.771985 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 11:53:32.775669 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 11:53:32.775872 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 11:53:32.776708 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 11:53:32.776871 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 11:53:32.779134 jq[1483]: true May 15 11:53:32.779271 extend-filesystems[1472]: Found loop3 May 15 11:53:32.779271 extend-filesystems[1472]: Found loop4 May 15 11:53:32.779271 extend-filesystems[1472]: Found loop5 May 15 11:53:32.779271 extend-filesystems[1472]: Found vda May 15 11:53:32.779271 extend-filesystems[1472]: Found vda1 May 15 11:53:32.779271 extend-filesystems[1472]: Found vda2 May 15 11:53:32.779271 extend-filesystems[1472]: Found vda3 May 15 11:53:32.779271 extend-filesystems[1472]: Found usr May 15 11:53:32.779271 extend-filesystems[1472]: Found vda4 May 15 11:53:32.779271 extend-filesystems[1472]: Found vda6 May 15 11:53:32.779271 extend-filesystems[1472]: Found vda7 May 15 11:53:32.779271 extend-filesystems[1472]: Found vda9 May 15 11:53:32.779271 extend-filesystems[1472]: Checking size of /dev/vda9 May 15 11:53:32.789310 systemd[1]: motdgen.service: Deactivated successfully. May 15 11:53:32.792840 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 15 11:53:32.793732 jq[1492]: true May 15 11:53:32.809926 extend-filesystems[1472]: Resized partition /dev/vda9 May 15 11:53:32.812812 extend-filesystems[1506]: resize2fs 1.47.2 (1-Jan-2025) May 15 11:53:32.824399 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 11:53:32.832460 dbus-daemon[1469]: [system] SELinux support is enabled May 15 11:53:32.831012 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 11:53:32.845740 update_engine[1481]: I20250515 11:53:32.832114 1481 main.cc:92] Flatcar Update Engine starting May 15 11:53:32.832654 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 11:53:32.837352 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 11:53:32.837375 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 11:53:32.840728 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 11:53:32.840745 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 11:53:32.852712 update_engine[1481]: I20250515 11:53:32.851718 1481 update_check_scheduler.cc:74] Next update check in 7m37s May 15 11:53:32.854169 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 11:53:32.853860 systemd[1]: Started update-engine.service - Update Engine. May 15 11:53:32.861722 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 15 11:53:32.864949 extend-filesystems[1506]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 11:53:32.864949 extend-filesystems[1506]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 11:53:32.864949 extend-filesystems[1506]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 11:53:32.875538 extend-filesystems[1472]: Resized filesystem in /dev/vda9 May 15 11:53:32.876696 tar[1490]: linux-arm64/LICENSE May 15 11:53:32.876696 tar[1490]: linux-arm64/helm May 15 11:53:32.870633 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 11:53:32.878230 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 11:53:32.883523 bash[1526]: Updated "/home/core/.ssh/authorized_keys" May 15 11:53:32.886681 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 11:53:32.889085 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 11:53:32.901253 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (Power Button) May 15 11:53:32.903608 systemd-logind[1480]: New seat seat0. May 15 11:53:32.907043 systemd[1]: Started systemd-logind.service - User Login Management. 
May 15 11:53:32.944439 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 11:53:33.069089 containerd[1508]: time="2025-05-15T11:53:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 11:53:33.072174 containerd[1508]: time="2025-05-15T11:53:33.072137228Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 11:53:33.085651 containerd[1508]: time="2025-05-15T11:53:33.085604953Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="51.44µs" May 15 11:53:33.085651 containerd[1508]: time="2025-05-15T11:53:33.085641417Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 11:53:33.085725 containerd[1508]: time="2025-05-15T11:53:33.085663543Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 11:53:33.085882 containerd[1508]: time="2025-05-15T11:53:33.085846739Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 11:53:33.085954 containerd[1508]: time="2025-05-15T11:53:33.085872619Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 11:53:33.085978 containerd[1508]: time="2025-05-15T11:53:33.085965914Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 11:53:33.086091 containerd[1508]: time="2025-05-15T11:53:33.086023185Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 11:53:33.086115 containerd[1508]: time="2025-05-15T11:53:33.086086966Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 11:53:33.086441 containerd[1508]: time="2025-05-15T11:53:33.086411184Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 11:53:33.086441 containerd[1508]: time="2025-05-15T11:53:33.086436784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 11:53:33.086487 containerd[1508]: time="2025-05-15T11:53:33.086460827Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 11:53:33.086487 containerd[1508]: time="2025-05-15T11:53:33.086469493Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 11:53:33.086632 containerd[1508]: time="2025-05-15T11:53:33.086611873Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 11:53:33.086936 containerd[1508]: time="2025-05-15T11:53:33.086903860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 11:53:33.086964 containerd[1508]: time="2025-05-15T11:53:33.086953783Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 11:53:33.086989 containerd[1508]: time="2025-05-15T11:53:33.086965285Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 11:53:33.087231 containerd[1508]: time="2025-05-15T11:53:33.087208508Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 11:53:33.087623 containerd[1508]: time="2025-05-15T11:53:33.087594749Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 11:53:33.087695 containerd[1508]: time="2025-05-15T11:53:33.087680496Z" level=info msg="metadata content store policy set" policy=shared May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092674356Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092728352Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092745486Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092757268Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092769928Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092780711Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092792613Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092804075Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092816935Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092827519Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092837144Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092849604Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.092983277Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 11:53:33.094515 containerd[1508]: time="2025-05-15T11:53:33.093005203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093019182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093031363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093045740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093062195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093073258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093083881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 
11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093098579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093110241Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093122422Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093299787Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093313486Z" level=info msg="Start snapshots syncer" May 15 11:53:33.094798 containerd[1508]: time="2025-05-15T11:53:33.093341921Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 11:53:33.095013 containerd[1508]: time="2025-05-15T11:53:33.093565535Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 11:53:33.095013 containerd[1508]: time="2025-05-15T11:53:33.093613101Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093693137Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093797415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093822417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093834358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093844103Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093856364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093866668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093877012Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093900056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093910480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093933445Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093976059Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093990556Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 11:53:33.095111 containerd[1508]: time="2025-05-15T11:53:33.093999702Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 11:53:33.095334 containerd[1508]: time="2025-05-15T11:53:33.094008848Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 11:53:33.095334 containerd[1508]: time="2025-05-15T11:53:33.094016755Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 11:53:33.095334 containerd[1508]: time="2025-05-15T11:53:33.094026341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 11:53:33.095334 containerd[1508]: time="2025-05-15T11:53:33.094036245Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 11:53:33.095334 containerd[1508]: time="2025-05-15T11:53:33.094113965Z" level=info msg="runtime interface created" May 15 11:53:33.095334 containerd[1508]: time="2025-05-15T11:53:33.094121234Z" level=info msg="created NRI interface" May 15 11:53:33.095334 containerd[1508]: time="2025-05-15T11:53:33.094129501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 11:53:33.095334 containerd[1508]: time="2025-05-15T11:53:33.094140723Z" level=info msg="Connect containerd service" May 15 11:53:33.095334 containerd[1508]: time="2025-05-15T11:53:33.094168121Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 11:53:33.096035 
containerd[1508]: time="2025-05-15T11:53:33.096001799Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 11:53:33.207545 containerd[1508]: time="2025-05-15T11:53:33.207473143Z" level=info msg="Start subscribing containerd event" May 15 11:53:33.207545 containerd[1508]: time="2025-05-15T11:53:33.207554297Z" level=info msg="Start recovering state" May 15 11:53:33.207733 containerd[1508]: time="2025-05-15T11:53:33.207716166Z" level=info msg="Start event monitor" May 15 11:53:33.207902 containerd[1508]: time="2025-05-15T11:53:33.207883427Z" level=info msg="Start cni network conf syncer for default" May 15 11:53:33.207929 containerd[1508]: time="2025-05-15T11:53:33.207903316Z" level=info msg="Start streaming server" May 15 11:53:33.207929 containerd[1508]: time="2025-05-15T11:53:33.207914299Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 11:53:33.207929 containerd[1508]: time="2025-05-15T11:53:33.207921528Z" level=info msg="runtime interface starting up..." May 15 11:53:33.207929 containerd[1508]: time="2025-05-15T11:53:33.207927039Z" level=info msg="starting plugins..." May 15 11:53:33.208297 containerd[1508]: time="2025-05-15T11:53:33.208275299Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 11:53:33.208477 containerd[1508]: time="2025-05-15T11:53:33.208458655Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 11:53:33.208676 containerd[1508]: time="2025-05-15T11:53:33.208655430Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 11:53:33.208928 containerd[1508]: time="2025-05-15T11:53:33.208910395Z" level=info msg="containerd successfully booted in 0.140215s" May 15 11:53:33.209039 systemd[1]: Started containerd.service - containerd container runtime. 
May 15 11:53:33.276158 tar[1490]: linux-arm64/README.md May 15 11:53:33.291627 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 11:53:33.579329 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 11:53:33.599099 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 11:53:33.602029 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 11:53:33.627072 systemd[1]: issuegen.service: Deactivated successfully. May 15 11:53:33.627353 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 11:53:33.632110 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 11:53:33.664605 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 11:53:33.667869 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 11:53:33.669990 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 11:53:33.671591 systemd[1]: Reached target getty.target - Login Prompts. May 15 11:53:33.804612 systemd-networkd[1414]: eth0: Gained IPv6LL May 15 11:53:33.808066 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 11:53:33.810803 systemd[1]: Reached target network-online.target - Network is Online. May 15 11:53:33.813308 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 11:53:33.815683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:53:33.831937 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 11:53:33.852946 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 11:53:33.854876 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 11:53:33.855342 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 15 11:53:33.857480 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 11:53:34.407923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:53:34.409444 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 11:53:34.410643 systemd[1]: Startup finished in 2.185s (kernel) + 5.262s (initrd) + 3.557s (userspace) = 11.005s. May 15 11:53:34.412146 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 11:53:34.816320 kubelet[1597]: E0515 11:53:34.816200 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 11:53:34.818422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 11:53:34.818569 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 11:53:34.818880 systemd[1]: kubelet.service: Consumed 775ms CPU time, 246.4M memory peak. May 15 11:53:38.761970 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 11:53:38.763042 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:36792.service - OpenSSH per-connection server daemon (10.0.0.1:36792). May 15 11:53:38.846841 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 36792 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 11:53:38.848423 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:53:38.856031 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 11:53:38.856957 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 15 11:53:38.862064 systemd-logind[1480]: New session 1 of user core. May 15 11:53:38.873889 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 11:53:38.876267 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 11:53:38.900437 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 11:53:38.902730 systemd-logind[1480]: New session c1 of user core. May 15 11:53:39.007561 systemd[1614]: Queued start job for default target default.target. May 15 11:53:39.024485 systemd[1614]: Created slice app.slice - User Application Slice. May 15 11:53:39.024549 systemd[1614]: Reached target paths.target - Paths. May 15 11:53:39.024594 systemd[1614]: Reached target timers.target - Timers. May 15 11:53:39.025801 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 11:53:39.034260 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 11:53:39.034308 systemd[1614]: Reached target sockets.target - Sockets. May 15 11:53:39.034342 systemd[1614]: Reached target basic.target - Basic System. May 15 11:53:39.034373 systemd[1614]: Reached target default.target - Main User Target. May 15 11:53:39.034413 systemd[1614]: Startup finished in 126ms. May 15 11:53:39.034598 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 11:53:39.036257 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 11:53:39.093709 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:36796.service - OpenSSH per-connection server daemon (10.0.0.1:36796). May 15 11:53:39.158324 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 36796 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 11:53:39.159631 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:53:39.164089 systemd-logind[1480]: New session 2 of user core. 
May 15 11:53:39.176628 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 11:53:39.227589 sshd[1627]: Connection closed by 10.0.0.1 port 36796 May 15 11:53:39.228016 sshd-session[1625]: pam_unix(sshd:session): session closed for user core May 15 11:53:39.248464 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:36796.service: Deactivated successfully. May 15 11:53:39.249778 systemd[1]: session-2.scope: Deactivated successfully. May 15 11:53:39.251133 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. May 15 11:53:39.255183 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:36802.service - OpenSSH per-connection server daemon (10.0.0.1:36802). May 15 11:53:39.255777 systemd-logind[1480]: Removed session 2. May 15 11:53:39.315170 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 36802 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 11:53:39.316208 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:53:39.320561 systemd-logind[1480]: New session 3 of user core. May 15 11:53:39.333631 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 11:53:39.380536 sshd[1635]: Connection closed by 10.0.0.1 port 36802 May 15 11:53:39.380803 sshd-session[1633]: pam_unix(sshd:session): session closed for user core May 15 11:53:39.390414 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:36802.service: Deactivated successfully. May 15 11:53:39.391745 systemd[1]: session-3.scope: Deactivated successfully. May 15 11:53:39.393666 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. May 15 11:53:39.394775 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:36810.service - OpenSSH per-connection server daemon (10.0.0.1:36810). May 15 11:53:39.395615 systemd-logind[1480]: Removed session 3. 
May 15 11:53:39.438081 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 36810 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 11:53:39.439173 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:53:39.443741 systemd-logind[1480]: New session 4 of user core. May 15 11:53:39.456643 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 11:53:39.506362 sshd[1643]: Connection closed by 10.0.0.1 port 36810 May 15 11:53:39.506707 sshd-session[1641]: pam_unix(sshd:session): session closed for user core May 15 11:53:39.516400 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:36810.service: Deactivated successfully. May 15 11:53:39.518650 systemd[1]: session-4.scope: Deactivated successfully. May 15 11:53:39.519332 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. May 15 11:53:39.522711 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:36826.service - OpenSSH per-connection server daemon (10.0.0.1:36826). May 15 11:53:39.523137 systemd-logind[1480]: Removed session 4. May 15 11:53:39.566321 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 36826 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 11:53:39.566990 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:53:39.570917 systemd-logind[1480]: New session 5 of user core. May 15 11:53:39.581630 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 15 11:53:39.638075 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 11:53:39.640171 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 11:53:39.651155 sudo[1652]: pam_unix(sudo:session): session closed for user root May 15 11:53:39.652505 sshd[1651]: Connection closed by 10.0.0.1 port 36826 May 15 11:53:39.652965 sshd-session[1649]: pam_unix(sshd:session): session closed for user core May 15 11:53:39.663531 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:36826.service: Deactivated successfully. May 15 11:53:39.664930 systemd[1]: session-5.scope: Deactivated successfully. May 15 11:53:39.665526 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. May 15 11:53:39.667756 systemd[1]: Started sshd@5-10.0.0.31:22-10.0.0.1:36828.service - OpenSSH per-connection server daemon (10.0.0.1:36828). May 15 11:53:39.668358 systemd-logind[1480]: Removed session 5. May 15 11:53:39.734209 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 36828 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 11:53:39.735344 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:53:39.739683 systemd-logind[1480]: New session 6 of user core. May 15 11:53:39.758639 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 15 11:53:39.808651 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 11:53:39.808923 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 11:53:39.885094 sudo[1663]: pam_unix(sudo:session): session closed for user root May 15 11:53:39.889955 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 11:53:39.890196 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 11:53:39.897960 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 11:53:39.937530 augenrules[1685]: No rules May 15 11:53:39.940340 sudo[1662]: pam_unix(sudo:session): session closed for user root May 15 11:53:39.938122 systemd[1]: audit-rules.service: Deactivated successfully. May 15 11:53:39.939545 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 11:53:39.942551 sshd[1661]: Connection closed by 10.0.0.1 port 36828 May 15 11:53:39.941847 sshd-session[1658]: pam_unix(sshd:session): session closed for user core May 15 11:53:39.951326 systemd[1]: sshd@5-10.0.0.31:22-10.0.0.1:36828.service: Deactivated successfully. May 15 11:53:39.952618 systemd[1]: session-6.scope: Deactivated successfully. May 15 11:53:39.953211 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. May 15 11:53:39.955385 systemd[1]: Started sshd@6-10.0.0.31:22-10.0.0.1:36838.service - OpenSSH per-connection server daemon (10.0.0.1:36838). May 15 11:53:39.957564 systemd-logind[1480]: Removed session 6. May 15 11:53:40.007894 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 36838 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 11:53:40.008933 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:53:40.013210 systemd-logind[1480]: New session 7 of user core. 
May 15 11:53:40.026628 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 11:53:40.077242 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 11:53:40.077523 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 11:53:40.421182 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 11:53:40.446785 (dockerd)[1718]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 11:53:40.742581 dockerd[1718]: time="2025-05-15T11:53:40.742452171Z" level=info msg="Starting up" May 15 11:53:40.743810 dockerd[1718]: time="2025-05-15T11:53:40.743779125Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 11:53:40.808995 dockerd[1718]: time="2025-05-15T11:53:40.808842680Z" level=info msg="Loading containers: start." May 15 11:53:40.819171 kernel: Initializing XFRM netlink socket May 15 11:53:41.027465 systemd-networkd[1414]: docker0: Link UP May 15 11:53:41.031575 dockerd[1718]: time="2025-05-15T11:53:41.031536030Z" level=info msg="Loading containers: done." 
May 15 11:53:41.052932 dockerd[1718]: time="2025-05-15T11:53:41.052878938Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 11:53:41.053053 dockerd[1718]: time="2025-05-15T11:53:41.052955726Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 11:53:41.053085 dockerd[1718]: time="2025-05-15T11:53:41.053052450Z" level=info msg="Initializing buildkit" May 15 11:53:41.074048 dockerd[1718]: time="2025-05-15T11:53:41.073996038Z" level=info msg="Completed buildkit initialization" May 15 11:53:41.081229 dockerd[1718]: time="2025-05-15T11:53:41.081190951Z" level=info msg="Daemon has completed initialization" May 15 11:53:41.081325 dockerd[1718]: time="2025-05-15T11:53:41.081258549Z" level=info msg="API listen on /run/docker.sock" May 15 11:53:41.082520 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 11:53:41.765060 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck250896082-merged.mount: Deactivated successfully. May 15 11:53:42.035509 containerd[1508]: time="2025-05-15T11:53:42.035385759Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 11:53:42.669587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4152403179.mount: Deactivated successfully. 
May 15 11:53:43.772507 containerd[1508]: time="2025-05-15T11:53:43.772269356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:43.775520 containerd[1508]: time="2025-05-15T11:53:43.775424078Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 15 11:53:43.776483 containerd[1508]: time="2025-05-15T11:53:43.776440652Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:43.779650 containerd[1508]: time="2025-05-15T11:53:43.779271779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:43.780217 containerd[1508]: time="2025-05-15T11:53:43.780188065Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.744759116s" May 15 11:53:43.780296 containerd[1508]: time="2025-05-15T11:53:43.780281999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 15 11:53:43.781019 containerd[1508]: time="2025-05-15T11:53:43.780982889Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 11:53:44.893482 containerd[1508]: time="2025-05-15T11:53:44.893415128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:44.894931 containerd[1508]: time="2025-05-15T11:53:44.894896469Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 15 11:53:44.895860 containerd[1508]: time="2025-05-15T11:53:44.895825214Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:44.898799 containerd[1508]: time="2025-05-15T11:53:44.898773393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:44.900566 containerd[1508]: time="2025-05-15T11:53:44.900535228Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.119522733s" May 15 11:53:44.900670 containerd[1508]: time="2025-05-15T11:53:44.900652141Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 15 11:53:44.901128 containerd[1508]: time="2025-05-15T11:53:44.901100531Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 11:53:45.068918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 11:53:45.070295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:53:45.201773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 11:53:45.204721 (kubelet)[1994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 11:53:45.239592 kubelet[1994]: E0515 11:53:45.239538 1994 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 11:53:45.242592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 11:53:45.242817 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 11:53:45.243500 systemd[1]: kubelet.service: Consumed 130ms CPU time, 101M memory peak. May 15 11:53:46.136170 containerd[1508]: time="2025-05-15T11:53:46.135739995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:46.136486 containerd[1508]: time="2025-05-15T11:53:46.136301620Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 15 11:53:46.137108 containerd[1508]: time="2025-05-15T11:53:46.137072309Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:46.139935 containerd[1508]: time="2025-05-15T11:53:46.139902967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:46.140936 containerd[1508]: time="2025-05-15T11:53:46.140886359Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id 
\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.239753942s" May 15 11:53:46.140936 containerd[1508]: time="2025-05-15T11:53:46.140916568Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 15 11:53:46.141444 containerd[1508]: time="2025-05-15T11:53:46.141415417Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 11:53:47.123584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882619856.mount: Deactivated successfully. May 15 11:53:47.340418 containerd[1508]: time="2025-05-15T11:53:47.340360895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:47.340932 containerd[1508]: time="2025-05-15T11:53:47.340902557Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 15 11:53:47.341745 containerd[1508]: time="2025-05-15T11:53:47.341716429Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:47.343353 containerd[1508]: time="2025-05-15T11:53:47.343323673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:47.344258 containerd[1508]: time="2025-05-15T11:53:47.344230532Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.202782309s" May 15 11:53:47.344298 containerd[1508]: time="2025-05-15T11:53:47.344260662Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 15 11:53:47.344774 containerd[1508]: time="2025-05-15T11:53:47.344741384Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 11:53:47.910012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590598828.mount: Deactivated successfully. May 15 11:53:49.538064 containerd[1508]: time="2025-05-15T11:53:49.537999650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:49.538677 containerd[1508]: time="2025-05-15T11:53:49.538642851Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 15 11:53:49.539308 containerd[1508]: time="2025-05-15T11:53:49.539283734Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:49.545515 containerd[1508]: time="2025-05-15T11:53:49.545430924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:49.546507 containerd[1508]: time="2025-05-15T11:53:49.546450294Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.201671267s" May 15 11:53:49.546507 containerd[1508]: time="2025-05-15T11:53:49.546486340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 15 11:53:49.546977 containerd[1508]: time="2025-05-15T11:53:49.546951547Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 11:53:49.961413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785801457.mount: Deactivated successfully. May 15 11:53:49.966463 containerd[1508]: time="2025-05-15T11:53:49.966413067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 11:53:49.966900 containerd[1508]: time="2025-05-15T11:53:49.966858532Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 15 11:53:49.967586 containerd[1508]: time="2025-05-15T11:53:49.967551646Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 11:53:49.969589 containerd[1508]: time="2025-05-15T11:53:49.969539314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 11:53:49.970225 containerd[1508]: time="2025-05-15T11:53:49.970091559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 423.10956ms" May 15 11:53:49.970225 containerd[1508]: time="2025-05-15T11:53:49.970123369Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 11:53:49.970836 containerd[1508]: time="2025-05-15T11:53:49.970764771Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 11:53:50.484258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2948392079.mount: Deactivated successfully. May 15 11:53:52.245406 containerd[1508]: time="2025-05-15T11:53:52.245331342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:52.245843 containerd[1508]: time="2025-05-15T11:53:52.245663581Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 15 11:53:52.246651 containerd[1508]: time="2025-05-15T11:53:52.246624447Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:52.249800 containerd[1508]: time="2025-05-15T11:53:52.249759990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:53:52.250885 containerd[1508]: time="2025-05-15T11:53:52.250852224Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"67941650\" in 2.280056521s" May 15 11:53:52.250922 containerd[1508]: time="2025-05-15T11:53:52.250885356Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 15 11:53:55.493079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 11:53:55.496793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:53:55.687795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:53:55.691644 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 11:53:55.725643 kubelet[2154]: E0515 11:53:55.725562 2154 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 11:53:55.728052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 11:53:55.728310 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 11:53:55.728716 systemd[1]: kubelet.service: Consumed 130ms CPU time, 102.5M memory peak. May 15 11:53:56.407223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:53:56.407374 systemd[1]: kubelet.service: Consumed 130ms CPU time, 102.5M memory peak. May 15 11:53:56.409349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:53:56.433133 systemd[1]: Reload requested from client PID 2171 ('systemctl') (unit session-7.scope)... May 15 11:53:56.433152 systemd[1]: Reloading... May 15 11:53:56.510532 zram_generator::config[2217]: No configuration found. 
May 15 11:53:56.648073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 11:53:56.740947 systemd[1]: Reloading finished in 307 ms. May 15 11:53:56.780288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:53:56.782332 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:53:56.784583 systemd[1]: kubelet.service: Deactivated successfully. May 15 11:53:56.784811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:53:56.784848 systemd[1]: kubelet.service: Consumed 85ms CPU time, 90.2M memory peak. May 15 11:53:56.786160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:53:56.903721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:53:56.908067 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 11:53:56.941127 kubelet[2261]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 11:53:56.941127 kubelet[2261]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 11:53:56.941127 kubelet[2261]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 11:53:56.941462 kubelet[2261]: I0515 11:53:56.941185 2261 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 11:53:57.535250 kubelet[2261]: I0515 11:53:57.535208 2261 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 11:53:57.535250 kubelet[2261]: I0515 11:53:57.535241 2261 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 11:53:57.535572 kubelet[2261]: I0515 11:53:57.535553 2261 server.go:954] "Client rotation is on, will bootstrap in background" May 15 11:53:57.806275 kubelet[2261]: E0515 11:53:57.806149 2261 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" May 15 11:53:57.807357 kubelet[2261]: I0515 11:53:57.807204 2261 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 11:53:57.815201 kubelet[2261]: I0515 11:53:57.815180 2261 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 11:53:57.818140 kubelet[2261]: I0515 11:53:57.818108 2261 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 11:53:57.818762 kubelet[2261]: I0515 11:53:57.818713 2261 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 11:53:57.818939 kubelet[2261]: I0515 11:53:57.818757 2261 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 11:53:57.819026 kubelet[2261]: I0515 11:53:57.819003 2261 topology_manager.go:138] "Creating topology manager with none policy" 
May 15 11:53:57.819026 kubelet[2261]: I0515 11:53:57.819014 2261 container_manager_linux.go:304] "Creating device plugin manager"
May 15 11:53:57.819272 kubelet[2261]: I0515 11:53:57.819245 2261 state_mem.go:36] "Initialized new in-memory state store"
May 15 11:53:57.822081 kubelet[2261]: I0515 11:53:57.822052 2261 kubelet.go:446] "Attempting to sync node with API server"
May 15 11:53:57.822081 kubelet[2261]: I0515 11:53:57.822081 2261 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 11:53:57.822368 kubelet[2261]: I0515 11:53:57.822109 2261 kubelet.go:352] "Adding apiserver pod source"
May 15 11:53:57.822368 kubelet[2261]: I0515 11:53:57.822126 2261 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 11:53:57.827841 kubelet[2261]: W0515 11:53:57.827786 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused
May 15 11:53:57.828025 kubelet[2261]: E0515 11:53:57.828003 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError"
May 15 11:53:57.828186 kubelet[2261]: I0515 11:53:57.828056 2261 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 15 11:53:57.829511 kubelet[2261]: I0515 11:53:57.828935 2261 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 11:53:57.829511 kubelet[2261]: W0515 11:53:57.827783 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused
May 15 11:53:57.829511 kubelet[2261]: E0515 11:53:57.829161 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError"
May 15 11:53:57.829511 kubelet[2261]: W0515 11:53:57.829106 2261 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 11:53:57.830417 kubelet[2261]: I0515 11:53:57.830396 2261 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 11:53:57.830529 kubelet[2261]: I0515 11:53:57.830519 2261 server.go:1287] "Started kubelet"
May 15 11:53:57.832875 kubelet[2261]: I0515 11:53:57.832829 2261 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 11:53:57.835265 kubelet[2261]: I0515 11:53:57.835181 2261 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 11:53:57.835576 kubelet[2261]: I0515 11:53:57.835549 2261 server.go:490] "Adding debug handlers to kubelet server"
May 15 11:53:57.835743 kubelet[2261]: I0515 11:53:57.835727 2261 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 11:53:57.836048 kubelet[2261]: I0515 11:53:57.836016 2261 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 11:53:57.841152 kubelet[2261]: I0515 11:53:57.839029 2261 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 11:53:57.841152 kubelet[2261]: E0515 11:53:57.840445 2261 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 11:53:57.841152 kubelet[2261]: I0515 11:53:57.840478 2261 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 11:53:57.841864 kubelet[2261]: I0515 11:53:57.841849 2261 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 11:53:57.842520 kubelet[2261]: W0515 11:53:57.842262 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused
May 15 11:53:57.842520 kubelet[2261]: E0515 11:53:57.842314 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError"
May 15 11:53:57.842520 kubelet[2261]: I0515 11:53:57.842394 2261 reconciler.go:26] "Reconciler: start to sync state"
May 15 11:53:57.842676 kubelet[2261]: E0515 11:53:57.842639 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms"
May 15 11:53:57.843518 kubelet[2261]: I0515 11:53:57.843441 2261 factory.go:221] Registration of the systemd container factory successfully
May 15 11:53:57.843728 kubelet[2261]: I0515 11:53:57.843670 2261 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 11:53:57.845177 kubelet[2261]: E0515 11:53:57.845143 2261 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 11:53:57.845335 kubelet[2261]: I0515 11:53:57.845302 2261 factory.go:221] Registration of the containerd container factory successfully
May 15 11:53:57.847612 kubelet[2261]: E0515 11:53:57.847299 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fb138c55c3a40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 11:53:57.830482496 +0000 UTC m=+0.919419498,LastTimestamp:2025-05-15 11:53:57.830482496 +0000 UTC m=+0.919419498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 11:53:57.855188 kubelet[2261]: E0515 11:53:57.855069 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fb138c55c3a40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 11:53:57.830482496 +0000 UTC m=+0.919419498,LastTimestamp:2025-05-15 11:53:57.830482496 +0000 UTC m=+0.919419498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 11:53:57.855435 kubelet[2261]: I0515 11:53:57.855414 2261 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 11:53:57.855435 kubelet[2261]: I0515 11:53:57.855432 2261 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 11:53:57.855525 kubelet[2261]: I0515 11:53:57.855447 2261 state_mem.go:36] "Initialized new in-memory state store"
May 15 11:53:57.856407 kubelet[2261]: I0515 11:53:57.856373 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 11:53:57.857639 kubelet[2261]: I0515 11:53:57.857609 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 11:53:57.857639 kubelet[2261]: I0515 11:53:57.857634 2261 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 11:53:57.857748 kubelet[2261]: I0515 11:53:57.857653 2261 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 11:53:57.857748 kubelet[2261]: I0515 11:53:57.857634 2261 policy_none.go:49] "None policy: Start"
May 15 11:53:57.857748 kubelet[2261]: I0515 11:53:57.857660 2261 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 11:53:57.857748 kubelet[2261]: I0515 11:53:57.857669 2261 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 11:53:57.857748 kubelet[2261]: I0515 11:53:57.857680 2261 state_mem.go:35] "Initializing new in-memory state store"
May 15 11:53:57.857748 kubelet[2261]: E0515 11:53:57.857696 2261 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 11:53:57.858183 kubelet[2261]: W0515 11:53:57.858147 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused
May 15 11:53:57.858211 kubelet[2261]: E0515 11:53:57.858196 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError"
May 15 11:53:57.865643 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 11:53:57.884988 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 11:53:57.888762 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 15 11:53:57.905425 kubelet[2261]: I0515 11:53:57.905400 2261 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 11:53:57.905666 kubelet[2261]: I0515 11:53:57.905649 2261 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 11:53:57.905714 kubelet[2261]: I0515 11:53:57.905670 2261 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 11:53:57.905955 kubelet[2261]: I0515 11:53:57.905941 2261 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 11:53:57.907010 kubelet[2261]: E0515 11:53:57.906990 2261 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 11:53:57.907058 kubelet[2261]: E0515 11:53:57.907031 2261 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 11:53:57.967482 systemd[1]: Created slice kubepods-burstable-podae61a97e8b0316abc9598916d6b439c6.slice - libcontainer container kubepods-burstable-podae61a97e8b0316abc9598916d6b439c6.slice.
May 15 11:53:57.991225 kubelet[2261]: E0515 11:53:57.991191 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 11:53:57.993975 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice.
May 15 11:53:58.004548 kubelet[2261]: E0515 11:53:58.004521 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 11:53:58.007123 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice.
May 15 11:53:58.008127 kubelet[2261]: I0515 11:53:58.008099 2261 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 11:53:58.009357 kubelet[2261]: E0515 11:53:58.009333 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 11:53:58.013377 kubelet[2261]: E0515 11:53:58.013332 2261 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost"
May 15 11:53:58.043697 kubelet[2261]: I0515 11:53:58.043656 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 11:53:58.043697 kubelet[2261]: I0515 11:53:58.043696 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 11:53:58.043785 kubelet[2261]: I0515 11:53:58.043716 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 11:53:58.043785 kubelet[2261]: I0515 11:53:58.043747 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 15 11:53:58.043785 kubelet[2261]: I0515 11:53:58.043764 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae61a97e8b0316abc9598916d6b439c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae61a97e8b0316abc9598916d6b439c6\") " pod="kube-system/kube-apiserver-localhost"
May 15 11:53:58.043785 kubelet[2261]: I0515 11:53:58.043778 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae61a97e8b0316abc9598916d6b439c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae61a97e8b0316abc9598916d6b439c6\") " pod="kube-system/kube-apiserver-localhost"
May 15 11:53:58.043888 kubelet[2261]: I0515 11:53:58.043792 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 11:53:58.043888 kubelet[2261]: E0515 11:53:58.043795 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms"
May 15 11:53:58.043888 kubelet[2261]: I0515 11:53:58.043806 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae61a97e8b0316abc9598916d6b439c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae61a97e8b0316abc9598916d6b439c6\") " pod="kube-system/kube-apiserver-localhost"
May 15 11:53:58.043888 kubelet[2261]: I0515 11:53:58.043850 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 11:53:58.214521 kubelet[2261]: I0515 11:53:58.214449 2261 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 11:53:58.214807 kubelet[2261]: E0515 11:53:58.214782 2261 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost"
May 15 11:53:58.292993 containerd[1508]: time="2025-05-15T11:53:58.292953191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae61a97e8b0316abc9598916d6b439c6,Namespace:kube-system,Attempt:0,}"
May 15 11:53:58.305927 containerd[1508]: time="2025-05-15T11:53:58.305891731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 15 11:53:58.311436 containerd[1508]: time="2025-05-15T11:53:58.310716073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 15 11:53:58.315484 containerd[1508]: time="2025-05-15T11:53:58.315439326Z" level=info msg="connecting to shim 71fc8bab26b1595bfc436a3a294053d55d93f4c369e3889ac269995bb003362f" address="unix:///run/containerd/s/2ce25e5bbb479ce0f8efb500122cf9570298994299543d7858a261c04b5b339a" namespace=k8s.io protocol=ttrpc version=3
May 15 11:53:58.339158 containerd[1508]: time="2025-05-15T11:53:58.339108552Z" level=info msg="connecting to shim 7e8bb81286a008fcdac02324f49be49a5a28cacdc1921c94a29c7e2041407dad" address="unix:///run/containerd/s/d5d506c613278774ebe14873c15f689e82bcb97a56d3b1f61db0dd85375df923" namespace=k8s.io protocol=ttrpc version=3
May 15 11:53:58.346609 containerd[1508]: time="2025-05-15T11:53:58.345961993Z" level=info msg="connecting to shim a002c24b70a2a9d6efa689097e1d1027411cbbf5005b56dcf1902a11d7eeace8" address="unix:///run/containerd/s/831952f2581b4af05736b22c8d9cb828c2090679f0314b654f032f37b97fe49b" namespace=k8s.io protocol=ttrpc version=3
May 15 11:53:58.350818 systemd[1]: Started cri-containerd-71fc8bab26b1595bfc436a3a294053d55d93f4c369e3889ac269995bb003362f.scope - libcontainer container 71fc8bab26b1595bfc436a3a294053d55d93f4c369e3889ac269995bb003362f.
May 15 11:53:58.370666 systemd[1]: Started cri-containerd-7e8bb81286a008fcdac02324f49be49a5a28cacdc1921c94a29c7e2041407dad.scope - libcontainer container 7e8bb81286a008fcdac02324f49be49a5a28cacdc1921c94a29c7e2041407dad.
May 15 11:53:58.374068 systemd[1]: Started cri-containerd-a002c24b70a2a9d6efa689097e1d1027411cbbf5005b56dcf1902a11d7eeace8.scope - libcontainer container a002c24b70a2a9d6efa689097e1d1027411cbbf5005b56dcf1902a11d7eeace8.
May 15 11:53:58.411766 containerd[1508]: time="2025-05-15T11:53:58.411670141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae61a97e8b0316abc9598916d6b439c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"71fc8bab26b1595bfc436a3a294053d55d93f4c369e3889ac269995bb003362f\""
May 15 11:53:58.415890 containerd[1508]: time="2025-05-15T11:53:58.415704117Z" level=info msg="CreateContainer within sandbox \"71fc8bab26b1595bfc436a3a294053d55d93f4c369e3889ac269995bb003362f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 15 11:53:58.421352 containerd[1508]: time="2025-05-15T11:53:58.421317186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"a002c24b70a2a9d6efa689097e1d1027411cbbf5005b56dcf1902a11d7eeace8\""
May 15 11:53:58.424423 containerd[1508]: time="2025-05-15T11:53:58.424386197Z" level=info msg="CreateContainer within sandbox \"a002c24b70a2a9d6efa689097e1d1027411cbbf5005b56dcf1902a11d7eeace8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 15 11:53:58.426038 containerd[1508]: time="2025-05-15T11:53:58.425984318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e8bb81286a008fcdac02324f49be49a5a28cacdc1921c94a29c7e2041407dad\""
May 15 11:53:58.429290 containerd[1508]: time="2025-05-15T11:53:58.429237280Z" level=info msg="CreateContainer within sandbox \"7e8bb81286a008fcdac02324f49be49a5a28cacdc1921c94a29c7e2041407dad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 15 11:53:58.430196 containerd[1508]: time="2025-05-15T11:53:58.430090563Z" level=info msg="Container 890ba83f8681912987a7e482bcf89ab607f8ec8895f1ed2a61c744bc8e0eaf84: CDI devices from CRI Config.CDIDevices: []"
May 15 11:53:58.438775 containerd[1508]: time="2025-05-15T11:53:58.438738867Z" level=info msg="Container d68ce7e15ae342101dd024b3fd529612d800beee05fd3dda9077a681d928824b: CDI devices from CRI Config.CDIDevices: []"
May 15 11:53:58.440610 containerd[1508]: time="2025-05-15T11:53:58.440578539Z" level=info msg="Container 5d6830123fb0f2de32a98eac99cde54780166436bc25c7c1a72f162dd8f754a9: CDI devices from CRI Config.CDIDevices: []"
May 15 11:53:58.440784 containerd[1508]: time="2025-05-15T11:53:58.440751937Z" level=info msg="CreateContainer within sandbox \"71fc8bab26b1595bfc436a3a294053d55d93f4c369e3889ac269995bb003362f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"890ba83f8681912987a7e482bcf89ab607f8ec8895f1ed2a61c744bc8e0eaf84\""
May 15 11:53:58.441419 containerd[1508]: time="2025-05-15T11:53:58.441355475Z" level=info msg="StartContainer for \"890ba83f8681912987a7e482bcf89ab607f8ec8895f1ed2a61c744bc8e0eaf84\""
May 15 11:53:58.442532 containerd[1508]: time="2025-05-15T11:53:58.442460341Z" level=info msg="connecting to shim 890ba83f8681912987a7e482bcf89ab607f8ec8895f1ed2a61c744bc8e0eaf84" address="unix:///run/containerd/s/2ce25e5bbb479ce0f8efb500122cf9570298994299543d7858a261c04b5b339a" protocol=ttrpc version=3
May 15 11:53:58.445082 kubelet[2261]: E0515 11:53:58.445049 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms"
May 15 11:53:58.447281 containerd[1508]: time="2025-05-15T11:53:58.447233039Z" level=info msg="CreateContainer within sandbox \"a002c24b70a2a9d6efa689097e1d1027411cbbf5005b56dcf1902a11d7eeace8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d68ce7e15ae342101dd024b3fd529612d800beee05fd3dda9077a681d928824b\""
May 15 11:53:58.448224 containerd[1508]: time="2025-05-15T11:53:58.448155713Z" level=info msg="StartContainer for \"d68ce7e15ae342101dd024b3fd529612d800beee05fd3dda9077a681d928824b\""
May 15 11:53:58.450181 containerd[1508]: time="2025-05-15T11:53:58.450074210Z" level=info msg="CreateContainer within sandbox \"7e8bb81286a008fcdac02324f49be49a5a28cacdc1921c94a29c7e2041407dad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5d6830123fb0f2de32a98eac99cde54780166436bc25c7c1a72f162dd8f754a9\""
May 15 11:53:58.450583 containerd[1508]: time="2025-05-15T11:53:58.450537885Z" level=info msg="StartContainer for \"5d6830123fb0f2de32a98eac99cde54780166436bc25c7c1a72f162dd8f754a9\""
May 15 11:53:58.451469 containerd[1508]: time="2025-05-15T11:53:58.451426343Z" level=info msg="connecting to shim d68ce7e15ae342101dd024b3fd529612d800beee05fd3dda9077a681d928824b" address="unix:///run/containerd/s/831952f2581b4af05736b22c8d9cb828c2090679f0314b654f032f37b97fe49b" protocol=ttrpc version=3
May 15 11:53:58.451683 containerd[1508]: time="2025-05-15T11:53:58.451537465Z" level=info msg="connecting to shim 5d6830123fb0f2de32a98eac99cde54780166436bc25c7c1a72f162dd8f754a9" address="unix:///run/containerd/s/d5d506c613278774ebe14873c15f689e82bcb97a56d3b1f61db0dd85375df923" protocol=ttrpc version=3
May 15 11:53:58.462695 systemd[1]: Started cri-containerd-890ba83f8681912987a7e482bcf89ab607f8ec8895f1ed2a61c744bc8e0eaf84.scope - libcontainer container 890ba83f8681912987a7e482bcf89ab607f8ec8895f1ed2a61c744bc8e0eaf84.
May 15 11:53:58.469469 systemd[1]: Started cri-containerd-5d6830123fb0f2de32a98eac99cde54780166436bc25c7c1a72f162dd8f754a9.scope - libcontainer container 5d6830123fb0f2de32a98eac99cde54780166436bc25c7c1a72f162dd8f754a9.
May 15 11:53:58.471299 systemd[1]: Started cri-containerd-d68ce7e15ae342101dd024b3fd529612d800beee05fd3dda9077a681d928824b.scope - libcontainer container d68ce7e15ae342101dd024b3fd529612d800beee05fd3dda9077a681d928824b.
May 15 11:53:58.523457 containerd[1508]: time="2025-05-15T11:53:58.523407619Z" level=info msg="StartContainer for \"890ba83f8681912987a7e482bcf89ab607f8ec8895f1ed2a61c744bc8e0eaf84\" returns successfully"
May 15 11:53:58.548794 containerd[1508]: time="2025-05-15T11:53:58.548734564Z" level=info msg="StartContainer for \"d68ce7e15ae342101dd024b3fd529612d800beee05fd3dda9077a681d928824b\" returns successfully"
May 15 11:53:58.549076 containerd[1508]: time="2025-05-15T11:53:58.549025480Z" level=info msg="StartContainer for \"5d6830123fb0f2de32a98eac99cde54780166436bc25c7c1a72f162dd8f754a9\" returns successfully"
May 15 11:53:58.617029 kubelet[2261]: I0515 11:53:58.616760 2261 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 11:53:58.617254 kubelet[2261]: E0515 11:53:58.617222 2261 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost"
May 15 11:53:58.868423 kubelet[2261]: E0515 11:53:58.868378 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 11:53:58.870987 kubelet[2261]: E0515 11:53:58.870948 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 11:53:58.875689 kubelet[2261]: E0515 11:53:58.875674 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 11:53:59.419112 kubelet[2261]: I0515 11:53:59.419054 2261 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 11:53:59.875511 kubelet[2261]: E0515 11:53:59.875113 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 11:53:59.875511 kubelet[2261]: E0515 11:53:59.875389 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 11:54:01.038416 kubelet[2261]: E0515 11:54:01.038358 2261 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 15 11:54:01.120280 kubelet[2261]: E0515 11:54:01.120127 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 11:54:01.233897 kubelet[2261]: I0515 11:54:01.233542 2261 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 15 11:54:01.242576 kubelet[2261]: I0515 11:54:01.242536 2261 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 11:54:01.251302 kubelet[2261]: E0515 11:54:01.251269 2261 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 15 11:54:01.251374 kubelet[2261]: I0515 11:54:01.251314 2261 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 15 11:54:01.253115 kubelet[2261]: E0515 11:54:01.253089 2261 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 15 11:54:01.253153 kubelet[2261]: I0515 11:54:01.253115 2261 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 11:54:01.254511 kubelet[2261]: E0515 11:54:01.254482 2261 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 15 11:54:01.830250 kubelet[2261]: I0515 11:54:01.830206 2261 apiserver.go:52] "Watching apiserver"
May 15 11:54:01.842482 kubelet[2261]: I0515 11:54:01.842422 2261 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 15 11:54:03.357677 systemd[1]: Reload requested from client PID 2530 ('systemctl') (unit session-7.scope)...
May 15 11:54:03.357981 systemd[1]: Reloading...
May 15 11:54:03.431531 zram_generator::config[2576]: No configuration found.
May 15 11:54:03.496276 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 11:54:03.603059 systemd[1]: Reloading finished in 244 ms.
May 15 11:54:03.637883 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 11:54:03.649479 systemd[1]: kubelet.service: Deactivated successfully.
May 15 11:54:03.650574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 11:54:03.650630 systemd[1]: kubelet.service: Consumed 1.181s CPU time, 123.8M memory peak.
May 15 11:54:03.652367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 11:54:03.791906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 11:54:03.806851 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 11:54:03.841161 kubelet[2615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 11:54:03.841161 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 11:54:03.841161 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 11:54:03.841482 kubelet[2615]: I0515 11:54:03.841208 2615 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 11:54:03.848980 kubelet[2615]: I0515 11:54:03.847992 2615 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 15 11:54:03.848980 kubelet[2615]: I0515 11:54:03.848795 2615 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 11:54:03.849549 kubelet[2615]: I0515 11:54:03.849267 2615 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 11:54:03.850982 kubelet[2615]: I0515 11:54:03.850951 2615 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 11:54:03.853570 kubelet[2615]: I0515 11:54:03.853532 2615 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 11:54:03.857052 kubelet[2615]: I0515 11:54:03.856997 2615 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 15 11:54:03.859493 kubelet[2615]: I0515 11:54:03.859468 2615 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 11:54:03.859703 kubelet[2615]: I0515 11:54:03.859678 2615 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 11:54:03.859853 kubelet[2615]: I0515 11:54:03.859706 2615 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 11:54:03.859930 kubelet[2615]: I0515 11:54:03.859862 2615 topology_manager.go:138] "Creating topology manager with none policy"
May 15 11:54:03.859930 kubelet[2615]: I0515 11:54:03.859870 2615 container_manager_linux.go:304] "Creating device plugin manager"
May 15 11:54:03.859930 kubelet[2615]: I0515 11:54:03.859913 2615 state_mem.go:36] "Initialized new in-memory state store"
May 15 11:54:03.860070 kubelet[2615]: I0515 11:54:03.860049 2615 kubelet.go:446] "Attempting to sync node with API server"
May 15 11:54:03.860070 kubelet[2615]: I0515 11:54:03.860064 2615 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 11:54:03.860484 kubelet[2615]: I0515 11:54:03.860088 2615 kubelet.go:352] "Adding apiserver pod source"
May 15 11:54:03.860484 kubelet[2615]: I0515 11:54:03.860104 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 11:54:03.860836 kubelet[2615]: I0515 11:54:03.860809 2615 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 15 11:54:03.861529 kubelet[2615]: I0515 11:54:03.861219 2615 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 11:54:03.861667 kubelet[2615]: I0515 11:54:03.861643 2615 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 11:54:03.861697 kubelet[2615]: I0515 11:54:03.861679 2615 server.go:1287] "Started kubelet"
May 15 11:54:03.862770 kubelet[2615]: I0515 11:54:03.862723 2615 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 11:54:03.864366 kubelet[2615]: I0515 11:54:03.864332 2615 server.go:490] "Adding debug handlers to kubelet server"
May 15 11:54:03.865698 kubelet[2615]: I0515 11:54:03.865647 2615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 11:54:03.865852 kubelet[2615]: I0515 11:54:03.865830 2615 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 11:54:03.867337 kubelet[2615]: E0515 11:54:03.867294 2615 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 11:54:03.868176 kubelet[2615]: I0515 11:54:03.868143 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 11:54:03.869841 kubelet[2615]: I0515 11:54:03.869810 2615 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 11:54:03.876253 kubelet[2615]: E0515 11:54:03.876224 2615 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 11:54:03.876253 kubelet[2615]: I0515 11:54:03.876261 2615 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 11:54:03.883636 kubelet[2615]: I0515 11:54:03.883538 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 11:54:03.884672 kubelet[2615]: I0515 11:54:03.884655 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 11:54:03.884750 kubelet[2615]: I0515 11:54:03.884741 2615 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 11:54:03.884819 kubelet[2615]: I0515 11:54:03.884810 2615 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 11:54:03.884873 kubelet[2615]: I0515 11:54:03.884864 2615 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 11:54:03.884980 kubelet[2615]: E0515 11:54:03.884962 2615 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 11:54:03.888181 kubelet[2615]: I0515 11:54:03.888100 2615 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 11:54:03.888973 kubelet[2615]: I0515 11:54:03.888235 2615 reconciler.go:26] "Reconciler: start to sync state"
May 15 11:54:03.892256 kubelet[2615]: I0515 11:54:03.889014 2615 factory.go:221] Registration of the containerd container factory successfully
May 15 11:54:03.892256 kubelet[2615]: I0515 11:54:03.889028 2615 factory.go:221] Registration of the systemd container factory successfully
May 15 11:54:03.892256 kubelet[2615]: I0515 11:54:03.889270 2615 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 11:54:03.918024 kubelet[2615]: I0515 11:54:03.917995 2615 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 11:54:03.918157 kubelet[2615]: I0515 11:54:03.918144 2615 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 11:54:03.918233 kubelet[2615]: I0515 11:54:03.918224 2615 state_mem.go:36] "Initialized new in-memory state store"
May 15 11:54:03.918435 kubelet[2615]: I0515 11:54:03.918419 2615 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 11:54:03.918533 kubelet[2615]: I0515 11:54:03.918508 2615 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 11:54:03.918590 kubelet[2615]: I0515 11:54:03.918582 2615 policy_none.go:49] "None policy: Start"
May 15 11:54:03.918640 kubelet[2615]: I0515 11:54:03.918632 2615 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15
11:54:03.918685 kubelet[2615]: I0515 11:54:03.918678 2615 state_mem.go:35] "Initializing new in-memory state store" May 15 11:54:03.918833 kubelet[2615]: I0515 11:54:03.918822 2615 state_mem.go:75] "Updated machine memory state" May 15 11:54:03.922464 kubelet[2615]: I0515 11:54:03.922419 2615 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 11:54:03.922621 kubelet[2615]: I0515 11:54:03.922590 2615 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 11:54:03.922656 kubelet[2615]: I0515 11:54:03.922609 2615 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 11:54:03.923691 kubelet[2615]: I0515 11:54:03.923607 2615 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 11:54:03.924093 kubelet[2615]: E0515 11:54:03.924070 2615 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 15 11:54:03.986568 kubelet[2615]: I0515 11:54:03.986525 2615 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 11:54:03.986694 kubelet[2615]: I0515 11:54:03.986525 2615 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 11:54:03.987014 kubelet[2615]: I0515 11:54:03.986999 2615 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 11:54:04.024951 kubelet[2615]: I0515 11:54:04.024915 2615 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 11:54:04.030746 kubelet[2615]: I0515 11:54:04.030678 2615 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 15 11:54:04.030746 kubelet[2615]: I0515 11:54:04.030754 2615 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 15 11:54:04.089204 kubelet[2615]: I0515 11:54:04.089125 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 11:54:04.089204 kubelet[2615]: I0515 11:54:04.089168 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 11:54:04.089204 kubelet[2615]: I0515 11:54:04.089190 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 11:54:04.089438 kubelet[2615]: I0515 11:54:04.089220 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 15 11:54:04.089438 kubelet[2615]: I0515 11:54:04.089237 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae61a97e8b0316abc9598916d6b439c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae61a97e8b0316abc9598916d6b439c6\") " pod="kube-system/kube-apiserver-localhost" May 15 11:54:04.089438 kubelet[2615]: I0515 11:54:04.089253 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae61a97e8b0316abc9598916d6b439c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae61a97e8b0316abc9598916d6b439c6\") " pod="kube-system/kube-apiserver-localhost" May 15 11:54:04.089438 kubelet[2615]: I0515 11:54:04.089267 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 11:54:04.089438 kubelet[2615]: I0515 11:54:04.089282 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 11:54:04.089573 kubelet[2615]: I0515 11:54:04.089298 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae61a97e8b0316abc9598916d6b439c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae61a97e8b0316abc9598916d6b439c6\") " pod="kube-system/kube-apiserver-localhost" May 15 11:54:04.860858 kubelet[2615]: I0515 11:54:04.860828 2615 apiserver.go:52] "Watching apiserver" May 15 11:54:04.888891 kubelet[2615]: I0515 11:54:04.888861 2615 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 11:54:04.906437 kubelet[2615]: I0515 11:54:04.905591 2615 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 11:54:04.906437 kubelet[2615]: I0515 11:54:04.906223 2615 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 11:54:04.916116 kubelet[2615]: E0515 11:54:04.916076 2615 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 11:54:04.917262 kubelet[2615]: E0515 11:54:04.917223 2615 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 11:54:04.962629 kubelet[2615]: I0515 11:54:04.962485 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.962470661 podStartE2EDuration="1.962470661s" podCreationTimestamp="2025-05-15 11:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:54:04.962294603 +0000 UTC m=+1.152556975" watchObservedRunningTime="2025-05-15 11:54:04.962470661 +0000 UTC m=+1.152733033" May 15 11:54:04.962629 kubelet[2615]: I0515 11:54:04.962605 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.962601305 podStartE2EDuration="1.962601305s" podCreationTimestamp="2025-05-15 11:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:54:04.949540544 +0000 UTC m=+1.139802956" watchObservedRunningTime="2025-05-15 11:54:04.962601305 +0000 UTC m=+1.152863677" May 15 11:54:04.986987 kubelet[2615]: I0515 11:54:04.986912 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9868760970000001 podStartE2EDuration="1.986876097s" podCreationTimestamp="2025-05-15 11:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:54:04.972769381 +0000 UTC m=+1.163031753" watchObservedRunningTime="2025-05-15 11:54:04.986876097 +0000 UTC m=+1.177138469" May 15 11:54:08.576809 sudo[1697]: pam_unix(sudo:session): session closed for user root May 15 11:54:08.578157 sshd[1696]: Connection closed by 10.0.0.1 port 36838 May 15 11:54:08.578513 sshd-session[1694]: pam_unix(sshd:session): session closed for user core May 15 11:54:08.586465 systemd[1]: sshd@6-10.0.0.31:22-10.0.0.1:36838.service: Deactivated successfully. May 15 11:54:08.588628 systemd[1]: session-7.scope: Deactivated successfully. May 15 11:54:08.589593 systemd[1]: session-7.scope: Consumed 5.946s CPU time, 239.1M memory peak. May 15 11:54:08.591378 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit. 
May 15 11:54:08.594126 systemd-logind[1480]: Removed session 7. May 15 11:54:09.182193 kubelet[2615]: I0515 11:54:09.182164 2615 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 11:54:09.182698 kubelet[2615]: I0515 11:54:09.182622 2615 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 11:54:09.182726 containerd[1508]: time="2025-05-15T11:54:09.182454305Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 11:54:10.128436 kubelet[2615]: I0515 11:54:10.128402 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8f01334-f879-44e8-864b-1ee0121dbcc5-xtables-lock\") pod \"kube-proxy-tzz76\" (UID: \"e8f01334-f879-44e8-864b-1ee0121dbcc5\") " pod="kube-system/kube-proxy-tzz76" May 15 11:54:10.128750 kubelet[2615]: I0515 11:54:10.128622 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8f01334-f879-44e8-864b-1ee0121dbcc5-lib-modules\") pod \"kube-proxy-tzz76\" (UID: \"e8f01334-f879-44e8-864b-1ee0121dbcc5\") " pod="kube-system/kube-proxy-tzz76" May 15 11:54:10.128750 kubelet[2615]: I0515 11:54:10.128649 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8f01334-f879-44e8-864b-1ee0121dbcc5-kube-proxy\") pod \"kube-proxy-tzz76\" (UID: \"e8f01334-f879-44e8-864b-1ee0121dbcc5\") " pod="kube-system/kube-proxy-tzz76" May 15 11:54:10.128750 kubelet[2615]: I0515 11:54:10.128712 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2sc\" (UniqueName: \"kubernetes.io/projected/e8f01334-f879-44e8-864b-1ee0121dbcc5-kube-api-access-9g2sc\") pod 
\"kube-proxy-tzz76\" (UID: \"e8f01334-f879-44e8-864b-1ee0121dbcc5\") " pod="kube-system/kube-proxy-tzz76" May 15 11:54:10.130860 systemd[1]: Created slice kubepods-besteffort-pode8f01334_f879_44e8_864b_1ee0121dbcc5.slice - libcontainer container kubepods-besteffort-pode8f01334_f879_44e8_864b_1ee0121dbcc5.slice. May 15 11:54:10.340514 systemd[1]: Created slice kubepods-besteffort-podaf59eeec_45b1_40f9_a0db_f5a9583e349a.slice - libcontainer container kubepods-besteffort-podaf59eeec_45b1_40f9_a0db_f5a9583e349a.slice. May 15 11:54:10.430435 kubelet[2615]: I0515 11:54:10.430306 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af59eeec-45b1-40f9-a0db-f5a9583e349a-var-lib-calico\") pod \"tigera-operator-789496d6f5-fpl9p\" (UID: \"af59eeec-45b1-40f9-a0db-f5a9583e349a\") " pod="tigera-operator/tigera-operator-789496d6f5-fpl9p" May 15 11:54:10.430435 kubelet[2615]: I0515 11:54:10.430374 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt7hh\" (UniqueName: \"kubernetes.io/projected/af59eeec-45b1-40f9-a0db-f5a9583e349a-kube-api-access-wt7hh\") pod \"tigera-operator-789496d6f5-fpl9p\" (UID: \"af59eeec-45b1-40f9-a0db-f5a9583e349a\") " pod="tigera-operator/tigera-operator-789496d6f5-fpl9p" May 15 11:54:10.443993 containerd[1508]: time="2025-05-15T11:54:10.443951783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzz76,Uid:e8f01334-f879-44e8-864b-1ee0121dbcc5,Namespace:kube-system,Attempt:0,}" May 15 11:54:10.459161 containerd[1508]: time="2025-05-15T11:54:10.459058318Z" level=info msg="connecting to shim 1fc28e1bcae96bb6665c0c30216c7fb1eb3b67acbf9273bf9284512657ea6acd" address="unix:///run/containerd/s/1d306f6d718b871aedca58d24c0e14c7fc5267778ec69a0718a4afa7099131ab" namespace=k8s.io protocol=ttrpc version=3 May 15 11:54:10.489678 systemd[1]: Started 
cri-containerd-1fc28e1bcae96bb6665c0c30216c7fb1eb3b67acbf9273bf9284512657ea6acd.scope - libcontainer container 1fc28e1bcae96bb6665c0c30216c7fb1eb3b67acbf9273bf9284512657ea6acd. May 15 11:54:10.512060 containerd[1508]: time="2025-05-15T11:54:10.512025944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzz76,Uid:e8f01334-f879-44e8-864b-1ee0121dbcc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fc28e1bcae96bb6665c0c30216c7fb1eb3b67acbf9273bf9284512657ea6acd\"" May 15 11:54:10.514909 containerd[1508]: time="2025-05-15T11:54:10.514482049Z" level=info msg="CreateContainer within sandbox \"1fc28e1bcae96bb6665c0c30216c7fb1eb3b67acbf9273bf9284512657ea6acd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 11:54:10.527123 containerd[1508]: time="2025-05-15T11:54:10.527088460Z" level=info msg="Container 21ec9743452267ca51ee37455c97c5e033d555d6ced497c61ef5600827a0b6f6: CDI devices from CRI Config.CDIDevices: []" May 15 11:54:10.534124 containerd[1508]: time="2025-05-15T11:54:10.534007271Z" level=info msg="CreateContainer within sandbox \"1fc28e1bcae96bb6665c0c30216c7fb1eb3b67acbf9273bf9284512657ea6acd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"21ec9743452267ca51ee37455c97c5e033d555d6ced497c61ef5600827a0b6f6\"" May 15 11:54:10.535660 containerd[1508]: time="2025-05-15T11:54:10.534649724Z" level=info msg="StartContainer for \"21ec9743452267ca51ee37455c97c5e033d555d6ced497c61ef5600827a0b6f6\"" May 15 11:54:10.537006 containerd[1508]: time="2025-05-15T11:54:10.536978010Z" level=info msg="connecting to shim 21ec9743452267ca51ee37455c97c5e033d555d6ced497c61ef5600827a0b6f6" address="unix:///run/containerd/s/1d306f6d718b871aedca58d24c0e14c7fc5267778ec69a0718a4afa7099131ab" protocol=ttrpc version=3 May 15 11:54:10.561673 systemd[1]: Started cri-containerd-21ec9743452267ca51ee37455c97c5e033d555d6ced497c61ef5600827a0b6f6.scope - libcontainer container 
21ec9743452267ca51ee37455c97c5e033d555d6ced497c61ef5600827a0b6f6. May 15 11:54:10.608739 containerd[1508]: time="2025-05-15T11:54:10.608422279Z" level=info msg="StartContainer for \"21ec9743452267ca51ee37455c97c5e033d555d6ced497c61ef5600827a0b6f6\" returns successfully" May 15 11:54:10.647711 containerd[1508]: time="2025-05-15T11:54:10.647429143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-fpl9p,Uid:af59eeec-45b1-40f9-a0db-f5a9583e349a,Namespace:tigera-operator,Attempt:0,}" May 15 11:54:10.664781 containerd[1508]: time="2025-05-15T11:54:10.664743941Z" level=info msg="connecting to shim 25bbde37744f8a1568b987249235d9661cf84bcbc3c50398c719efc44960ba9d" address="unix:///run/containerd/s/48b224be0b0472fd6898e8b7843bc4be918ef9e968ce1aef09e2b1bec87f6376" namespace=k8s.io protocol=ttrpc version=3 May 15 11:54:10.691661 systemd[1]: Started cri-containerd-25bbde37744f8a1568b987249235d9661cf84bcbc3c50398c719efc44960ba9d.scope - libcontainer container 25bbde37744f8a1568b987249235d9661cf84bcbc3c50398c719efc44960ba9d. May 15 11:54:10.723750 containerd[1508]: time="2025-05-15T11:54:10.723701862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-fpl9p,Uid:af59eeec-45b1-40f9-a0db-f5a9583e349a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"25bbde37744f8a1568b987249235d9661cf84bcbc3c50398c719efc44960ba9d\"" May 15 11:54:10.726746 containerd[1508]: time="2025-05-15T11:54:10.726638418Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 11:54:12.683830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164236727.mount: Deactivated successfully. 
May 15 11:54:13.175012 containerd[1508]: time="2025-05-15T11:54:13.174958558Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:13.175631 containerd[1508]: time="2025-05-15T11:54:13.175604318Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 15 11:54:13.176086 containerd[1508]: time="2025-05-15T11:54:13.176044846Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:13.178259 containerd[1508]: time="2025-05-15T11:54:13.178233214Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:13.178986 containerd[1508]: time="2025-05-15T11:54:13.178843269Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.452145879s" May 15 11:54:13.178986 containerd[1508]: time="2025-05-15T11:54:13.178877534Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 15 11:54:13.186331 containerd[1508]: time="2025-05-15T11:54:13.186256926Z" level=info msg="CreateContainer within sandbox \"25bbde37744f8a1568b987249235d9661cf84bcbc3c50398c719efc44960ba9d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 11:54:13.195797 containerd[1508]: time="2025-05-15T11:54:13.194177322Z" level=info msg="Container 
5a3c596ea61511cb67a24c836515149746505708928436ec0a90b123ca5967f3: CDI devices from CRI Config.CDIDevices: []" May 15 11:54:13.200136 containerd[1508]: time="2025-05-15T11:54:13.200086912Z" level=info msg="CreateContainer within sandbox \"25bbde37744f8a1568b987249235d9661cf84bcbc3c50398c719efc44960ba9d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5a3c596ea61511cb67a24c836515149746505708928436ec0a90b123ca5967f3\"" May 15 11:54:13.200810 containerd[1508]: time="2025-05-15T11:54:13.200721516Z" level=info msg="StartContainer for \"5a3c596ea61511cb67a24c836515149746505708928436ec0a90b123ca5967f3\"" May 15 11:54:13.201710 containerd[1508]: time="2025-05-15T11:54:13.201681419Z" level=info msg="connecting to shim 5a3c596ea61511cb67a24c836515149746505708928436ec0a90b123ca5967f3" address="unix:///run/containerd/s/48b224be0b0472fd6898e8b7843bc4be918ef9e968ce1aef09e2b1bec87f6376" protocol=ttrpc version=3 May 15 11:54:13.260673 systemd[1]: Started cri-containerd-5a3c596ea61511cb67a24c836515149746505708928436ec0a90b123ca5967f3.scope - libcontainer container 5a3c596ea61511cb67a24c836515149746505708928436ec0a90b123ca5967f3. 
May 15 11:54:13.284550 containerd[1508]: time="2025-05-15T11:54:13.284514522Z" level=info msg="StartContainer for \"5a3c596ea61511cb67a24c836515149746505708928436ec0a90b123ca5967f3\" returns successfully" May 15 11:54:13.933448 kubelet[2615]: I0515 11:54:13.933228 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tzz76" podStartSLOduration=3.933210659 podStartE2EDuration="3.933210659s" podCreationTimestamp="2025-05-15 11:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:54:10.9295241 +0000 UTC m=+7.119786512" watchObservedRunningTime="2025-05-15 11:54:13.933210659 +0000 UTC m=+10.123473031" May 15 11:54:13.934648 kubelet[2615]: I0515 11:54:13.933684 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-fpl9p" podStartSLOduration=1.474596687 podStartE2EDuration="3.933673857s" podCreationTimestamp="2025-05-15 11:54:10 +0000 UTC" firstStartedPulling="2025-05-15 11:54:10.725402129 +0000 UTC m=+6.915664501" lastFinishedPulling="2025-05-15 11:54:13.184479299 +0000 UTC m=+9.374741671" observedRunningTime="2025-05-15 11:54:13.933044291 +0000 UTC m=+10.123306663" watchObservedRunningTime="2025-05-15 11:54:13.933673857 +0000 UTC m=+10.123936269" May 15 11:54:17.249214 systemd[1]: Created slice kubepods-besteffort-pod1bf4d2fb_6e8c_4f82_883a_028e2dbd8e61.slice - libcontainer container kubepods-besteffort-pod1bf4d2fb_6e8c_4f82_883a_028e2dbd8e61.slice. 
May 15 11:54:17.276383 kubelet[2615]: I0515 11:54:17.276324 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf4d2fb-6e8c-4f82-883a-028e2dbd8e61-tigera-ca-bundle\") pod \"calico-typha-597ffc4dc7-j6v6g\" (UID: \"1bf4d2fb-6e8c-4f82-883a-028e2dbd8e61\") " pod="calico-system/calico-typha-597ffc4dc7-j6v6g" May 15 11:54:17.276753 kubelet[2615]: I0515 11:54:17.276440 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-962p4\" (UniqueName: \"kubernetes.io/projected/1bf4d2fb-6e8c-4f82-883a-028e2dbd8e61-kube-api-access-962p4\") pod \"calico-typha-597ffc4dc7-j6v6g\" (UID: \"1bf4d2fb-6e8c-4f82-883a-028e2dbd8e61\") " pod="calico-system/calico-typha-597ffc4dc7-j6v6g" May 15 11:54:17.276753 kubelet[2615]: I0515 11:54:17.276467 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1bf4d2fb-6e8c-4f82-883a-028e2dbd8e61-typha-certs\") pod \"calico-typha-597ffc4dc7-j6v6g\" (UID: \"1bf4d2fb-6e8c-4f82-883a-028e2dbd8e61\") " pod="calico-system/calico-typha-597ffc4dc7-j6v6g" May 15 11:54:17.327263 systemd[1]: Created slice kubepods-besteffort-pod000a4bb8_1599_4999_aedc_73ac6dfbc4b7.slice - libcontainer container kubepods-besteffort-pod000a4bb8_1599_4999_aedc_73ac6dfbc4b7.slice. 
May 15 11:54:17.377543 kubelet[2615]: I0515 11:54:17.377262 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-cni-net-dir\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377543 kubelet[2615]: I0515 11:54:17.377321 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-xtables-lock\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377543 kubelet[2615]: I0515 11:54:17.377338 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-cni-bin-dir\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377543 kubelet[2615]: I0515 11:54:17.377354 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjvqx\" (UniqueName: \"kubernetes.io/projected/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-kube-api-access-zjvqx\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377543 kubelet[2615]: I0515 11:54:17.377401 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-lib-modules\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377765 kubelet[2615]: I0515 11:54:17.377446 2615 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-node-certs\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377765 kubelet[2615]: I0515 11:54:17.377611 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-policysync\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377765 kubelet[2615]: I0515 11:54:17.377648 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-flexvol-driver-host\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377765 kubelet[2615]: I0515 11:54:17.377687 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-var-lib-calico\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377765 kubelet[2615]: I0515 11:54:17.377711 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-var-run-calico\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377877 kubelet[2615]: I0515 11:54:17.377729 2615 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-tigera-ca-bundle\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.377877 kubelet[2615]: I0515 11:54:17.377744 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/000a4bb8-1599-4999-aedc-73ac6dfbc4b7-cni-log-dir\") pod \"calico-node-5g8sh\" (UID: \"000a4bb8-1599-4999-aedc-73ac6dfbc4b7\") " pod="calico-system/calico-node-5g8sh" May 15 11:54:17.431001 kubelet[2615]: E0515 11:54:17.430611 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62nn" podUID="dbc4a777-ce62-4800-8ac3-f6b4ef044eab" May 15 11:54:17.479508 kubelet[2615]: I0515 11:54:17.478455 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbc4a777-ce62-4800-8ac3-f6b4ef044eab-kubelet-dir\") pod \"csi-node-driver-x62nn\" (UID: \"dbc4a777-ce62-4800-8ac3-f6b4ef044eab\") " pod="calico-system/csi-node-driver-x62nn" May 15 11:54:17.479508 kubelet[2615]: I0515 11:54:17.478567 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dbc4a777-ce62-4800-8ac3-f6b4ef044eab-registration-dir\") pod \"csi-node-driver-x62nn\" (UID: \"dbc4a777-ce62-4800-8ac3-f6b4ef044eab\") " pod="calico-system/csi-node-driver-x62nn" May 15 11:54:17.479508 kubelet[2615]: I0515 11:54:17.478588 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-n26qn\" (UniqueName: \"kubernetes.io/projected/dbc4a777-ce62-4800-8ac3-f6b4ef044eab-kube-api-access-n26qn\") pod \"csi-node-driver-x62nn\" (UID: \"dbc4a777-ce62-4800-8ac3-f6b4ef044eab\") " pod="calico-system/csi-node-driver-x62nn" May 15 11:54:17.479508 kubelet[2615]: I0515 11:54:17.478606 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dbc4a777-ce62-4800-8ac3-f6b4ef044eab-varrun\") pod \"csi-node-driver-x62nn\" (UID: \"dbc4a777-ce62-4800-8ac3-f6b4ef044eab\") " pod="calico-system/csi-node-driver-x62nn" May 15 11:54:17.479508 kubelet[2615]: I0515 11:54:17.478633 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dbc4a777-ce62-4800-8ac3-f6b4ef044eab-socket-dir\") pod \"csi-node-driver-x62nn\" (UID: \"dbc4a777-ce62-4800-8ac3-f6b4ef044eab\") " pod="calico-system/csi-node-driver-x62nn" May 15 11:54:17.486080 kubelet[2615]: E0515 11:54:17.486054 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:54:17.486206 kubelet[2615]: W0515 11:54:17.486191 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:54:17.486300 kubelet[2615]: E0515 11:54:17.486286 2615 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:54:17.556147 containerd[1508]: time="2025-05-15T11:54:17.556029796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-597ffc4dc7-j6v6g,Uid:1bf4d2fb-6e8c-4f82-883a-028e2dbd8e61,Namespace:calico-system,Attempt:0,}"
[Elided: the kubelet FlexVolume probe error triple above (driver-call.go:262 "Failed to unmarshal output", driver-call.go:149 "executable file not found in $PATH", plugins.go:695 "Error dynamically probing plugins") repeats verbatim between 11:54:17.490 and 11:54:17.595; only the interleaved containerd and systemd entries are kept.]
May 15 11:54:17.592619 containerd[1508]: time="2025-05-15T11:54:17.592569044Z" level=info msg="connecting to shim 44c5d76aa41e4ebea7d942a27035889e89012d8dddbaabfc4b8f2cbb4e8d42c0" address="unix:///run/containerd/s/f7193d48ad41d8c31f84c479818a0df98a71a689961658848034517531d15d8e" namespace=k8s.io protocol=ttrpc version=3 May 15 11:54:17.629677 systemd[1]: Started cri-containerd-44c5d76aa41e4ebea7d942a27035889e89012d8dddbaabfc4b8f2cbb4e8d42c0.scope - libcontainer container 44c5d76aa41e4ebea7d942a27035889e89012d8dddbaabfc4b8f2cbb4e8d42c0.
May 15 11:54:17.633749 containerd[1508]: time="2025-05-15T11:54:17.633698974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5g8sh,Uid:000a4bb8-1599-4999-aedc-73ac6dfbc4b7,Namespace:calico-system,Attempt:0,}" May 15 11:54:17.654522 containerd[1508]: time="2025-05-15T11:54:17.654452946Z" level=info msg="connecting to shim ca578fba91a2a318aa0b91688cfc6263909eb09125e9d4d3a5950ac0da305950" address="unix:///run/containerd/s/efafd48b6811a7650ebc891d902db3dcbee49adb79a9f22f6e333a902e1d0cf0" namespace=k8s.io protocol=ttrpc version=3 May 15 11:54:17.684687 systemd[1]: Started cri-containerd-ca578fba91a2a318aa0b91688cfc6263909eb09125e9d4d3a5950ac0da305950.scope - libcontainer container ca578fba91a2a318aa0b91688cfc6263909eb09125e9d4d3a5950ac0da305950. May 15 11:54:17.711657 containerd[1508]: time="2025-05-15T11:54:17.711484147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-597ffc4dc7-j6v6g,Uid:1bf4d2fb-6e8c-4f82-883a-028e2dbd8e61,Namespace:calico-system,Attempt:0,} returns sandbox id \"44c5d76aa41e4ebea7d942a27035889e89012d8dddbaabfc4b8f2cbb4e8d42c0\"" May 15 11:54:17.712817 containerd[1508]: time="2025-05-15T11:54:17.712792166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5g8sh,Uid:000a4bb8-1599-4999-aedc-73ac6dfbc4b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca578fba91a2a318aa0b91688cfc6263909eb09125e9d4d3a5950ac0da305950\"" May 15 11:54:17.719380 containerd[1508]: time="2025-05-15T11:54:17.719344856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 11:54:18.406324 update_engine[1481]: I20250515 11:54:18.406250 1481 update_attempter.cc:509] Updating boot flags... 
May 15 11:54:18.886147 kubelet[2615]: E0515 11:54:18.886063 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62nn" podUID="dbc4a777-ce62-4800-8ac3-f6b4ef044eab" May 15 11:54:18.915950 containerd[1508]: time="2025-05-15T11:54:18.915897049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:18.916983 containerd[1508]: time="2025-05-15T11:54:18.916942941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 15 11:54:18.917908 containerd[1508]: time="2025-05-15T11:54:18.917878674Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:18.920440 containerd[1508]: time="2025-05-15T11:54:18.920392741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:18.921018 containerd[1508]: time="2025-05-15T11:54:18.920981962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.201596801s" May 15 11:54:18.921052 containerd[1508]: time="2025-05-15T11:54:18.921018669Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 15 11:54:18.925512 containerd[1508]: time="2025-05-15T11:54:18.922506837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 11:54:18.925512 containerd[1508]: time="2025-05-15T11:54:18.924130434Z" level=info msg="CreateContainer within sandbox \"ca578fba91a2a318aa0b91688cfc6263909eb09125e9d4d3a5950ac0da305950\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 11:54:18.943517 containerd[1508]: time="2025-05-15T11:54:18.942722657Z" level=info msg="Container 781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4: CDI devices from CRI Config.CDIDevices: []" May 15 11:54:18.950543 containerd[1508]: time="2025-05-15T11:54:18.950470143Z" level=info msg="CreateContainer within sandbox \"ca578fba91a2a318aa0b91688cfc6263909eb09125e9d4d3a5950ac0da305950\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4\"" May 15 11:54:18.952509 containerd[1508]: time="2025-05-15T11:54:18.952435734Z" level=info msg="StartContainer for \"781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4\"" May 15 11:54:18.954389 containerd[1508]: time="2025-05-15T11:54:18.954339708Z" level=info msg="connecting to shim 781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4" address="unix:///run/containerd/s/efafd48b6811a7650ebc891d902db3dcbee49adb79a9f22f6e333a902e1d0cf0" protocol=ttrpc version=3 May 15 11:54:18.985706 systemd[1]: Started cri-containerd-781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4.scope - libcontainer container 781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4. 
May 15 11:54:19.026082 containerd[1508]: time="2025-05-15T11:54:19.026033676Z" level=info msg="StartContainer for \"781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4\" returns successfully" May 15 11:54:19.070875 systemd[1]: cri-containerd-781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4.scope: Deactivated successfully. May 15 11:54:19.071154 systemd[1]: cri-containerd-781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4.scope: Consumed 65ms CPU time, 8.1M memory peak, 6.2M written to disk. May 15 11:54:19.090960 containerd[1508]: time="2025-05-15T11:54:19.090906763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4\" id:\"781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4\" pid:3168 exited_at:{seconds:1747310059 nanos:80869290}" May 15 11:54:19.092471 containerd[1508]: time="2025-05-15T11:54:19.092423138Z" level=info msg="received exit event container_id:\"781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4\" id:\"781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4\" pid:3168 exited_at:{seconds:1747310059 nanos:80869290}" May 15 11:54:19.136619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-781b8a83333a57f295c17933b66c7890619ae9d55ee11cd1f7ee9e8e69b06fb4-rootfs.mount: Deactivated successfully. 
May 15 11:54:20.885641 kubelet[2615]: E0515 11:54:20.885594 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62nn" podUID="dbc4a777-ce62-4800-8ac3-f6b4ef044eab" May 15 11:54:21.503908 containerd[1508]: time="2025-05-15T11:54:21.503802114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:21.504665 containerd[1508]: time="2025-05-15T11:54:21.504604323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 15 11:54:21.505575 containerd[1508]: time="2025-05-15T11:54:21.505544646Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:21.507674 containerd[1508]: time="2025-05-15T11:54:21.507636900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:21.508397 containerd[1508]: time="2025-05-15T11:54:21.508360896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.585819472s" May 15 11:54:21.508560 containerd[1508]: time="2025-05-15T11:54:21.508509486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference 
\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 15 11:54:21.510545 containerd[1508]: time="2025-05-15T11:54:21.509703843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 11:54:21.527718 containerd[1508]: time="2025-05-15T11:54:21.527655109Z" level=info msg="CreateContainer within sandbox \"44c5d76aa41e4ebea7d942a27035889e89012d8dddbaabfc4b8f2cbb4e8d42c0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 11:54:21.541604 containerd[1508]: time="2025-05-15T11:54:21.541548903Z" level=info msg="Container 84319748ba0768e23c516a3630913f0efd731a22ade36dc465e0982f170e60fc: CDI devices from CRI Config.CDIDevices: []" May 15 11:54:21.549766 containerd[1508]: time="2025-05-15T11:54:21.549704913Z" level=info msg="CreateContainer within sandbox \"44c5d76aa41e4ebea7d942a27035889e89012d8dddbaabfc4b8f2cbb4e8d42c0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"84319748ba0768e23c516a3630913f0efd731a22ade36dc465e0982f170e60fc\"" May 15 11:54:21.552863 containerd[1508]: time="2025-05-15T11:54:21.552820462Z" level=info msg="StartContainer for \"84319748ba0768e23c516a3630913f0efd731a22ade36dc465e0982f170e60fc\"" May 15 11:54:21.554034 containerd[1508]: time="2025-05-15T11:54:21.554001344Z" level=info msg="connecting to shim 84319748ba0768e23c516a3630913f0efd731a22ade36dc465e0982f170e60fc" address="unix:///run/containerd/s/f7193d48ad41d8c31f84c479818a0df98a71a689961658848034517531d15d8e" protocol=ttrpc version=3 May 15 11:54:21.582790 systemd[1]: Started cri-containerd-84319748ba0768e23c516a3630913f0efd731a22ade36dc465e0982f170e60fc.scope - libcontainer container 84319748ba0768e23c516a3630913f0efd731a22ade36dc465e0982f170e60fc. 
May 15 11:54:21.633698 containerd[1508]: time="2025-05-15T11:54:21.633656920Z" level=info msg="StartContainer for \"84319748ba0768e23c516a3630913f0efd731a22ade36dc465e0982f170e60fc\" returns successfully" May 15 11:54:21.958458 kubelet[2615]: I0515 11:54:21.957968 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-597ffc4dc7-j6v6g" podStartSLOduration=1.165371154 podStartE2EDuration="4.95795291s" podCreationTimestamp="2025-05-15 11:54:17 +0000 UTC" firstStartedPulling="2025-05-15 11:54:17.716709866 +0000 UTC m=+13.906972238" lastFinishedPulling="2025-05-15 11:54:21.509291622 +0000 UTC m=+17.699553994" observedRunningTime="2025-05-15 11:54:21.956396875 +0000 UTC m=+18.146659247" watchObservedRunningTime="2025-05-15 11:54:21.95795291 +0000 UTC m=+18.148215282" May 15 11:54:22.886033 kubelet[2615]: E0515 11:54:22.885980 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62nn" podUID="dbc4a777-ce62-4800-8ac3-f6b4ef044eab" May 15 11:54:23.849530 kernel: hrtimer: interrupt took 1917433 ns May 15 11:54:24.886156 kubelet[2615]: E0515 11:54:24.886097 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62nn" podUID="dbc4a777-ce62-4800-8ac3-f6b4ef044eab" May 15 11:54:25.649663 containerd[1508]: time="2025-05-15T11:54:25.649465680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:25.651053 containerd[1508]: time="2025-05-15T11:54:25.650198863Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 15 11:54:25.651725 containerd[1508]: time="2025-05-15T11:54:25.651658549Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:25.654067 containerd[1508]: time="2025-05-15T11:54:25.654026686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:25.654760 containerd[1508]: time="2025-05-15T11:54:25.654678252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.144931662s" May 15 11:54:25.654960 containerd[1508]: time="2025-05-15T11:54:25.654709083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 15 11:54:25.658440 containerd[1508]: time="2025-05-15T11:54:25.658311253Z" level=info msg="CreateContainer within sandbox \"ca578fba91a2a318aa0b91688cfc6263909eb09125e9d4d3a5950ac0da305950\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 11:54:25.669701 containerd[1508]: time="2025-05-15T11:54:25.669661482Z" level=info msg="Container 8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc: CDI devices from CRI Config.CDIDevices: []" May 15 11:54:25.672987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355063567.mount: Deactivated successfully. 
May 15 11:54:25.678915 containerd[1508]: time="2025-05-15T11:54:25.678870267Z" level=info msg="CreateContainer within sandbox \"ca578fba91a2a318aa0b91688cfc6263909eb09125e9d4d3a5950ac0da305950\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc\"" May 15 11:54:25.679573 containerd[1508]: time="2025-05-15T11:54:25.679520674Z" level=info msg="StartContainer for \"8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc\"" May 15 11:54:25.681164 containerd[1508]: time="2025-05-15T11:54:25.681130995Z" level=info msg="connecting to shim 8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc" address="unix:///run/containerd/s/efafd48b6811a7650ebc891d902db3dcbee49adb79a9f22f6e333a902e1d0cf0" protocol=ttrpc version=3 May 15 11:54:25.712666 systemd[1]: Started cri-containerd-8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc.scope - libcontainer container 8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc. May 15 11:54:25.810965 containerd[1508]: time="2025-05-15T11:54:25.810926004Z" level=info msg="StartContainer for \"8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc\" returns successfully" May 15 11:54:26.293025 systemd[1]: cri-containerd-8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc.scope: Deactivated successfully. May 15 11:54:26.293291 systemd[1]: cri-containerd-8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc.scope: Consumed 452ms CPU time, 159.1M memory peak, 52K read from disk, 150.3M written to disk. 
May 15 11:54:26.302985 containerd[1508]: time="2025-05-15T11:54:26.302914272Z" level=info msg="received exit event container_id:\"8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc\" id:\"8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc\" pid:3278 exited_at:{seconds:1747310066 nanos:295630687}" May 15 11:54:26.308188 containerd[1508]: time="2025-05-15T11:54:26.308145647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc\" id:\"8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc\" pid:3278 exited_at:{seconds:1747310066 nanos:295630687}" May 15 11:54:26.332787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8aef7f531514c5687c9936e13d7bf9300477d26b8c4bc4e0450e31e1c85b22bc-rootfs.mount: Deactivated successfully. May 15 11:54:26.351094 kubelet[2615]: I0515 11:54:26.351030 2615 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 11:54:26.439744 systemd[1]: Created slice kubepods-burstable-poda9aaee40_fbce_4f87_980b_c2aa9858f7e4.slice - libcontainer container kubepods-burstable-poda9aaee40_fbce_4f87_980b_c2aa9858f7e4.slice. May 15 11:54:26.454997 systemd[1]: Created slice kubepods-besteffort-pod4ae821ff_2451_4dd4_b829_888ac44a3d95.slice - libcontainer container kubepods-besteffort-pod4ae821ff_2451_4dd4_b829_888ac44a3d95.slice. May 15 11:54:26.464397 systemd[1]: Created slice kubepods-burstable-podb2638139_58aa_4f7c_a7ba_8ff847602cba.slice - libcontainer container kubepods-burstable-podb2638139_58aa_4f7c_a7ba_8ff847602cba.slice. May 15 11:54:26.472160 systemd[1]: Created slice kubepods-besteffort-pod73e051ec_f022_4ded_b603_a96eab642f9c.slice - libcontainer container kubepods-besteffort-pod73e051ec_f022_4ded_b603_a96eab642f9c.slice. 
May 15 11:54:26.479396 systemd[1]: Created slice kubepods-besteffort-pod12f63fcf_8e10_4714_8b17_fc2a1f973263.slice - libcontainer container kubepods-besteffort-pod12f63fcf_8e10_4714_8b17_fc2a1f973263.slice. May 15 11:54:26.549924 kubelet[2615]: I0515 11:54:26.549815 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dszh5\" (UniqueName: \"kubernetes.io/projected/b2638139-58aa-4f7c-a7ba-8ff847602cba-kube-api-access-dszh5\") pod \"coredns-668d6bf9bc-pvl6d\" (UID: \"b2638139-58aa-4f7c-a7ba-8ff847602cba\") " pod="kube-system/coredns-668d6bf9bc-pvl6d" May 15 11:54:26.549924 kubelet[2615]: I0515 11:54:26.549857 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9aaee40-fbce-4f87-980b-c2aa9858f7e4-config-volume\") pod \"coredns-668d6bf9bc-k75xk\" (UID: \"a9aaee40-fbce-4f87-980b-c2aa9858f7e4\") " pod="kube-system/coredns-668d6bf9bc-k75xk" May 15 11:54:26.549924 kubelet[2615]: I0515 11:54:26.549876 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2638139-58aa-4f7c-a7ba-8ff847602cba-config-volume\") pod \"coredns-668d6bf9bc-pvl6d\" (UID: \"b2638139-58aa-4f7c-a7ba-8ff847602cba\") " pod="kube-system/coredns-668d6bf9bc-pvl6d" May 15 11:54:26.549924 kubelet[2615]: I0515 11:54:26.549897 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12f63fcf-8e10-4714-8b17-fc2a1f973263-calico-apiserver-certs\") pod \"calico-apiserver-76c5478b6-8djln\" (UID: \"12f63fcf-8e10-4714-8b17-fc2a1f973263\") " pod="calico-apiserver/calico-apiserver-76c5478b6-8djln" May 15 11:54:26.549924 kubelet[2615]: I0515 11:54:26.549915 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dbbrs\" (UniqueName: \"kubernetes.io/projected/73e051ec-f022-4ded-b603-a96eab642f9c-kube-api-access-dbbrs\") pod \"calico-apiserver-76c5478b6-4kpt7\" (UID: \"73e051ec-f022-4ded-b603-a96eab642f9c\") " pod="calico-apiserver/calico-apiserver-76c5478b6-4kpt7" May 15 11:54:26.550125 kubelet[2615]: I0515 11:54:26.549933 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/a9aaee40-fbce-4f87-980b-c2aa9858f7e4-kube-api-access-jtzlv\") pod \"coredns-668d6bf9bc-k75xk\" (UID: \"a9aaee40-fbce-4f87-980b-c2aa9858f7e4\") " pod="kube-system/coredns-668d6bf9bc-k75xk" May 15 11:54:26.550125 kubelet[2615]: I0515 11:54:26.549950 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ae821ff-2451-4dd4-b829-888ac44a3d95-tigera-ca-bundle\") pod \"calico-kube-controllers-675675fd54-pt9h5\" (UID: \"4ae821ff-2451-4dd4-b829-888ac44a3d95\") " pod="calico-system/calico-kube-controllers-675675fd54-pt9h5" May 15 11:54:26.550125 kubelet[2615]: I0515 11:54:26.549965 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzqk\" (UniqueName: \"kubernetes.io/projected/4ae821ff-2451-4dd4-b829-888ac44a3d95-kube-api-access-7kzqk\") pod \"calico-kube-controllers-675675fd54-pt9h5\" (UID: \"4ae821ff-2451-4dd4-b829-888ac44a3d95\") " pod="calico-system/calico-kube-controllers-675675fd54-pt9h5" May 15 11:54:26.550125 kubelet[2615]: I0515 11:54:26.549980 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ltxh\" (UniqueName: \"kubernetes.io/projected/12f63fcf-8e10-4714-8b17-fc2a1f973263-kube-api-access-5ltxh\") pod \"calico-apiserver-76c5478b6-8djln\" (UID: \"12f63fcf-8e10-4714-8b17-fc2a1f973263\") " 
pod="calico-apiserver/calico-apiserver-76c5478b6-8djln" May 15 11:54:26.550125 kubelet[2615]: I0515 11:54:26.550017 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73e051ec-f022-4ded-b603-a96eab642f9c-calico-apiserver-certs\") pod \"calico-apiserver-76c5478b6-4kpt7\" (UID: \"73e051ec-f022-4ded-b603-a96eab642f9c\") " pod="calico-apiserver/calico-apiserver-76c5478b6-4kpt7" May 15 11:54:26.751975 containerd[1508]: time="2025-05-15T11:54:26.751935113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k75xk,Uid:a9aaee40-fbce-4f87-980b-c2aa9858f7e4,Namespace:kube-system,Attempt:0,}" May 15 11:54:26.761521 containerd[1508]: time="2025-05-15T11:54:26.760725903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675675fd54-pt9h5,Uid:4ae821ff-2451-4dd4-b829-888ac44a3d95,Namespace:calico-system,Attempt:0,}" May 15 11:54:26.772296 containerd[1508]: time="2025-05-15T11:54:26.772236831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pvl6d,Uid:b2638139-58aa-4f7c-a7ba-8ff847602cba,Namespace:kube-system,Attempt:0,}" May 15 11:54:26.796189 containerd[1508]: time="2025-05-15T11:54:26.787446415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c5478b6-8djln,Uid:12f63fcf-8e10-4714-8b17-fc2a1f973263,Namespace:calico-apiserver,Attempt:0,}" May 15 11:54:26.796373 containerd[1508]: time="2025-05-15T11:54:26.796357931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c5478b6-4kpt7,Uid:73e051ec-f022-4ded-b603-a96eab642f9c,Namespace:calico-apiserver,Attempt:0,}" May 15 11:54:26.901873 systemd[1]: Created slice kubepods-besteffort-poddbc4a777_ce62_4800_8ac3_f6b4ef044eab.slice - libcontainer container kubepods-besteffort-poddbc4a777_ce62_4800_8ac3_f6b4ef044eab.slice. 
May 15 11:54:26.913530 containerd[1508]: time="2025-05-15T11:54:26.913485949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x62nn,Uid:dbc4a777-ce62-4800-8ac3-f6b4ef044eab,Namespace:calico-system,Attempt:0,}" May 15 11:54:26.970373 containerd[1508]: time="2025-05-15T11:54:26.968380673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 11:54:27.223841 containerd[1508]: time="2025-05-15T11:54:27.223661328Z" level=error msg="Failed to destroy network for sandbox \"aafb098ac90e5509969f972883d6043efa40fdb3a83765a6df3f567fab1af75c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.232403 containerd[1508]: time="2025-05-15T11:54:27.232355624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x62nn,Uid:dbc4a777-ce62-4800-8ac3-f6b4ef044eab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aafb098ac90e5509969f972883d6043efa40fdb3a83765a6df3f567fab1af75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.233282 containerd[1508]: time="2025-05-15T11:54:27.232963135Z" level=error msg="Failed to destroy network for sandbox \"28e74e6e28a4cfe0e8e01342e796bcfc741ccb032b97092c3aa17890b6d231ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.233442 kubelet[2615]: E0515 11:54:27.233371 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aafb098ac90e5509969f972883d6043efa40fdb3a83765a6df3f567fab1af75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.234254 containerd[1508]: time="2025-05-15T11:54:27.234211947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c5478b6-4kpt7,Uid:73e051ec-f022-4ded-b603-a96eab642f9c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28e74e6e28a4cfe0e8e01342e796bcfc741ccb032b97092c3aa17890b6d231ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.234737 kubelet[2615]: E0515 11:54:27.234400 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28e74e6e28a4cfe0e8e01342e796bcfc741ccb032b97092c3aa17890b6d231ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.234737 kubelet[2615]: E0515 11:54:27.234440 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28e74e6e28a4cfe0e8e01342e796bcfc741ccb032b97092c3aa17890b6d231ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c5478b6-4kpt7" May 15 11:54:27.234737 kubelet[2615]: E0515 11:54:27.234459 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"28e74e6e28a4cfe0e8e01342e796bcfc741ccb032b97092c3aa17890b6d231ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c5478b6-4kpt7" May 15 11:54:27.234825 kubelet[2615]: E0515 11:54:27.234592 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c5478b6-4kpt7_calico-apiserver(73e051ec-f022-4ded-b603-a96eab642f9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c5478b6-4kpt7_calico-apiserver(73e051ec-f022-4ded-b603-a96eab642f9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28e74e6e28a4cfe0e8e01342e796bcfc741ccb032b97092c3aa17890b6d231ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c5478b6-4kpt7" podUID="73e051ec-f022-4ded-b603-a96eab642f9c" May 15 11:54:27.234825 kubelet[2615]: E0515 11:54:27.234676 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aafb098ac90e5509969f972883d6043efa40fdb3a83765a6df3f567fab1af75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x62nn" May 15 11:54:27.234825 kubelet[2615]: E0515 11:54:27.234707 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aafb098ac90e5509969f972883d6043efa40fdb3a83765a6df3f567fab1af75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x62nn" May 15 11:54:27.234916 kubelet[2615]: E0515 11:54:27.234741 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x62nn_calico-system(dbc4a777-ce62-4800-8ac3-f6b4ef044eab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x62nn_calico-system(dbc4a777-ce62-4800-8ac3-f6b4ef044eab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aafb098ac90e5509969f972883d6043efa40fdb3a83765a6df3f567fab1af75c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x62nn" podUID="dbc4a777-ce62-4800-8ac3-f6b4ef044eab" May 15 11:54:27.238943 containerd[1508]: time="2025-05-15T11:54:27.238898921Z" level=error msg="Failed to destroy network for sandbox \"1a9ff4979cf3c938cfd0ca2c4daa6ff80749831add86bfab08d0b5e136d66336\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.240447 containerd[1508]: time="2025-05-15T11:54:27.240406660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675675fd54-pt9h5,Uid:4ae821ff-2451-4dd4-b829-888ac44a3d95,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9ff4979cf3c938cfd0ca2c4daa6ff80749831add86bfab08d0b5e136d66336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.240644 kubelet[2615]: E0515 11:54:27.240579 2615 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9ff4979cf3c938cfd0ca2c4daa6ff80749831add86bfab08d0b5e136d66336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.240644 kubelet[2615]: E0515 11:54:27.240619 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9ff4979cf3c938cfd0ca2c4daa6ff80749831add86bfab08d0b5e136d66336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-675675fd54-pt9h5" May 15 11:54:27.240644 kubelet[2615]: E0515 11:54:27.240640 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9ff4979cf3c938cfd0ca2c4daa6ff80749831add86bfab08d0b5e136d66336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-675675fd54-pt9h5" May 15 11:54:27.240746 kubelet[2615]: E0515 11:54:27.240677 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-675675fd54-pt9h5_calico-system(4ae821ff-2451-4dd4-b829-888ac44a3d95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-675675fd54-pt9h5_calico-system(4ae821ff-2451-4dd4-b829-888ac44a3d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a9ff4979cf3c938cfd0ca2c4daa6ff80749831add86bfab08d0b5e136d66336\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-675675fd54-pt9h5" podUID="4ae821ff-2451-4dd4-b829-888ac44a3d95" May 15 11:54:27.242790 containerd[1508]: time="2025-05-15T11:54:27.242747608Z" level=error msg="Failed to destroy network for sandbox \"f4fd5ea501d64d3f077ed1e8940ba716deaf4dc347a61afcd42b40d36b53dbb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.244102 containerd[1508]: time="2025-05-15T11:54:27.244052884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c5478b6-8djln,Uid:12f63fcf-8e10-4714-8b17-fc2a1f973263,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4fd5ea501d64d3f077ed1e8940ba716deaf4dc347a61afcd42b40d36b53dbb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.244438 kubelet[2615]: E0515 11:54:27.244378 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4fd5ea501d64d3f077ed1e8940ba716deaf4dc347a61afcd42b40d36b53dbb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.244558 kubelet[2615]: E0515 11:54:27.244537 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4fd5ea501d64d3f077ed1e8940ba716deaf4dc347a61afcd42b40d36b53dbb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c5478b6-8djln" May 15 11:54:27.244702 kubelet[2615]: E0515 11:54:27.244605 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4fd5ea501d64d3f077ed1e8940ba716deaf4dc347a61afcd42b40d36b53dbb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c5478b6-8djln" May 15 11:54:27.244702 kubelet[2615]: E0515 11:54:27.244647 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c5478b6-8djln_calico-apiserver(12f63fcf-8e10-4714-8b17-fc2a1f973263)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c5478b6-8djln_calico-apiserver(12f63fcf-8e10-4714-8b17-fc2a1f973263)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4fd5ea501d64d3f077ed1e8940ba716deaf4dc347a61afcd42b40d36b53dbb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c5478b6-8djln" podUID="12f63fcf-8e10-4714-8b17-fc2a1f973263" May 15 11:54:27.247092 containerd[1508]: time="2025-05-15T11:54:27.247048169Z" level=error msg="Failed to destroy network for sandbox \"c4855cfd116d39e1efca772f4ba5af6b32b3dc0303bb073c2ec57607258f2ac7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.247946 containerd[1508]: time="2025-05-15T11:54:27.247888455Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-pvl6d,Uid:b2638139-58aa-4f7c-a7ba-8ff847602cba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4855cfd116d39e1efca772f4ba5af6b32b3dc0303bb073c2ec57607258f2ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.248674 kubelet[2615]: E0515 11:54:27.248624 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4855cfd116d39e1efca772f4ba5af6b32b3dc0303bb073c2ec57607258f2ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.248739 kubelet[2615]: E0515 11:54:27.248692 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4855cfd116d39e1efca772f4ba5af6b32b3dc0303bb073c2ec57607258f2ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pvl6d" May 15 11:54:27.248739 kubelet[2615]: E0515 11:54:27.248710 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4855cfd116d39e1efca772f4ba5af6b32b3dc0303bb073c2ec57607258f2ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pvl6d" May 15 11:54:27.248785 kubelet[2615]: E0515 11:54:27.248747 2615 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pvl6d_kube-system(b2638139-58aa-4f7c-a7ba-8ff847602cba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pvl6d_kube-system(b2638139-58aa-4f7c-a7ba-8ff847602cba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4855cfd116d39e1efca772f4ba5af6b32b3dc0303bb073c2ec57607258f2ac7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pvl6d" podUID="b2638139-58aa-4f7c-a7ba-8ff847602cba" May 15 11:54:27.250063 containerd[1508]: time="2025-05-15T11:54:27.250029818Z" level=error msg="Failed to destroy network for sandbox \"955a7b269e022d21ff57b29df120f7f958d984ae80ddd971d1caf0015c141690\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.251313 containerd[1508]: time="2025-05-15T11:54:27.251270792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k75xk,Uid:a9aaee40-fbce-4f87-980b-c2aa9858f7e4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"955a7b269e022d21ff57b29df120f7f958d984ae80ddd971d1caf0015c141690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.251466 kubelet[2615]: E0515 11:54:27.251436 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"955a7b269e022d21ff57b29df120f7f958d984ae80ddd971d1caf0015c141690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:54:27.251540 kubelet[2615]: E0515 11:54:27.251475 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"955a7b269e022d21ff57b29df120f7f958d984ae80ddd971d1caf0015c141690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k75xk" May 15 11:54:27.251540 kubelet[2615]: E0515 11:54:27.251508 2615 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"955a7b269e022d21ff57b29df120f7f958d984ae80ddd971d1caf0015c141690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k75xk" May 15 11:54:27.251614 kubelet[2615]: E0515 11:54:27.251555 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k75xk_kube-system(a9aaee40-fbce-4f87-980b-c2aa9858f7e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k75xk_kube-system(a9aaee40-fbce-4f87-980b-c2aa9858f7e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"955a7b269e022d21ff57b29df120f7f958d984ae80ddd971d1caf0015c141690\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k75xk" podUID="a9aaee40-fbce-4f87-980b-c2aa9858f7e4" May 15 11:54:27.669152 systemd[1]: run-netns-cni\x2d6e0746d0\x2d382c\x2dee56\x2dec14\x2d769a3316fc9c.mount: 
Deactivated successfully. May 15 11:54:27.669397 systemd[1]: run-netns-cni\x2d581a10cf\x2d161e\x2d5673\x2d5073\x2d16cc59e1ce4d.mount: Deactivated successfully. May 15 11:54:27.669569 systemd[1]: run-netns-cni\x2d32ed17ee\x2d43f2\x2dfa26\x2d4cf2\x2d358a32a4e72d.mount: Deactivated successfully. May 15 11:54:30.937626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586872840.mount: Deactivated successfully. May 15 11:54:31.014224 containerd[1508]: time="2025-05-15T11:54:31.014160981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 15 11:54:31.018034 containerd[1508]: time="2025-05-15T11:54:31.017847516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.049427253s" May 15 11:54:31.018034 containerd[1508]: time="2025-05-15T11:54:31.017882747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 15 11:54:31.026780 containerd[1508]: time="2025-05-15T11:54:31.026722857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:31.027553 containerd[1508]: time="2025-05-15T11:54:31.027519221Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:31.028190 containerd[1508]: time="2025-05-15T11:54:31.028159944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:31.039812 containerd[1508]: time="2025-05-15T11:54:31.039758497Z" level=info msg="CreateContainer within sandbox \"ca578fba91a2a318aa0b91688cfc6263909eb09125e9d4d3a5950ac0da305950\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 11:54:31.079783 containerd[1508]: time="2025-05-15T11:54:31.079559726Z" level=info msg="Container 295d3bf421a58d4f2f1608b0a9bc0d5dae3ed61511a6f9af8323f3a90414faeb: CDI devices from CRI Config.CDIDevices: []" May 15 11:54:31.089394 containerd[1508]: time="2025-05-15T11:54:31.089342804Z" level=info msg="CreateContainer within sandbox \"ca578fba91a2a318aa0b91688cfc6263909eb09125e9d4d3a5950ac0da305950\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"295d3bf421a58d4f2f1608b0a9bc0d5dae3ed61511a6f9af8323f3a90414faeb\"" May 15 11:54:31.092510 containerd[1508]: time="2025-05-15T11:54:31.089932659Z" level=info msg="StartContainer for \"295d3bf421a58d4f2f1608b0a9bc0d5dae3ed61511a6f9af8323f3a90414faeb\"" May 15 11:54:31.093689 containerd[1508]: time="2025-05-15T11:54:31.093642829Z" level=info msg="connecting to shim 295d3bf421a58d4f2f1608b0a9bc0d5dae3ed61511a6f9af8323f3a90414faeb" address="unix:///run/containerd/s/efafd48b6811a7650ebc891d902db3dcbee49adb79a9f22f6e333a902e1d0cf0" protocol=ttrpc version=3 May 15 11:54:31.119718 systemd[1]: Started cri-containerd-295d3bf421a58d4f2f1608b0a9bc0d5dae3ed61511a6f9af8323f3a90414faeb.scope - libcontainer container 295d3bf421a58d4f2f1608b0a9bc0d5dae3ed61511a6f9af8323f3a90414faeb. May 15 11:54:31.159932 containerd[1508]: time="2025-05-15T11:54:31.158372018Z" level=info msg="StartContainer for \"295d3bf421a58d4f2f1608b0a9bc0d5dae3ed61511a6f9af8323f3a90414faeb\" returns successfully" May 15 11:54:31.461957 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 11:54:31.462316 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 15 11:54:32.034102 kubelet[2615]: I0515 11:54:32.034017 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5g8sh" podStartSLOduration=1.730539216 podStartE2EDuration="15.033999998s" podCreationTimestamp="2025-05-15 11:54:17 +0000 UTC" firstStartedPulling="2025-05-15 11:54:17.715129831 +0000 UTC m=+13.905392203" lastFinishedPulling="2025-05-15 11:54:31.018590613 +0000 UTC m=+27.208852985" observedRunningTime="2025-05-15 11:54:32.033740099 +0000 UTC m=+28.224002471" watchObservedRunningTime="2025-05-15 11:54:32.033999998 +0000 UTC m=+28.224262370" May 15 11:54:33.005623 kubelet[2615]: I0515 11:54:33.005578 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 11:54:33.099013 systemd-networkd[1414]: vxlan.calico: Link UP May 15 11:54:33.099023 systemd-networkd[1414]: vxlan.calico: Gained carrier May 15 11:54:34.800180 systemd-networkd[1414]: vxlan.calico: Gained IPv6LL May 15 11:54:36.332160 systemd[1]: Started sshd@7-10.0.0.31:22-10.0.0.1:39302.service - OpenSSH per-connection server daemon (10.0.0.1:39302). May 15 11:54:36.462346 sshd[3817]: Accepted publickey for core from 10.0.0.1 port 39302 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 11:54:36.463752 sshd-session[3817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:54:36.468726 systemd-logind[1480]: New session 8 of user core. May 15 11:54:36.477704 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 11:54:36.605806 sshd[3819]: Connection closed by 10.0.0.1 port 39302 May 15 11:54:36.606350 sshd-session[3817]: pam_unix(sshd:session): session closed for user core May 15 11:54:36.609842 systemd[1]: sshd@7-10.0.0.31:22-10.0.0.1:39302.service: Deactivated successfully. May 15 11:54:36.611706 systemd[1]: session-8.scope: Deactivated successfully. May 15 11:54:36.612394 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. 
May 15 11:54:36.613713 systemd-logind[1480]: Removed session 8. May 15 11:54:37.889757 containerd[1508]: time="2025-05-15T11:54:37.889720372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pvl6d,Uid:b2638139-58aa-4f7c-a7ba-8ff847602cba,Namespace:kube-system,Attempt:0,}" May 15 11:54:38.116045 systemd-networkd[1414]: cali77262c1672d: Link UP May 15 11:54:38.117532 systemd-networkd[1414]: cali77262c1672d: Gained carrier May 15 11:54:38.131153 containerd[1508]: 2025-05-15 11:54:37.979 [INFO][3833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0 coredns-668d6bf9bc- kube-system b2638139-58aa-4f7c-a7ba-8ff847602cba 667 0 2025-05-15 11:54:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-pvl6d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali77262c1672d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvl6d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pvl6d-" May 15 11:54:38.131153 containerd[1508]: 2025-05-15 11:54:37.979 [INFO][3833] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvl6d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0" May 15 11:54:38.131153 containerd[1508]: 2025-05-15 11:54:38.070 [INFO][3846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" 
HandleID="k8s-pod-network.d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Workload="localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0" May 15 11:54:38.131352 containerd[1508]: 2025-05-15 11:54:38.087 [INFO][3846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" HandleID="k8s-pod-network.d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Workload="localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002786f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-pvl6d", "timestamp":"2025-05-15 11:54:38.070839663 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 11:54:38.131352 containerd[1508]: 2025-05-15 11:54:38.087 [INFO][3846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 11:54:38.131352 containerd[1508]: 2025-05-15 11:54:38.087 [INFO][3846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 11:54:38.131352 containerd[1508]: 2025-05-15 11:54:38.087 [INFO][3846] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 11:54:38.131352 containerd[1508]: 2025-05-15 11:54:38.089 [INFO][3846] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" host="localhost" May 15 11:54:38.131352 containerd[1508]: 2025-05-15 11:54:38.093 [INFO][3846] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 11:54:38.131352 containerd[1508]: 2025-05-15 11:54:38.097 [INFO][3846] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 11:54:38.131352 containerd[1508]: 2025-05-15 11:54:38.099 [INFO][3846] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 11:54:38.131352 containerd[1508]: 2025-05-15 11:54:38.101 [INFO][3846] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 11:54:38.131352 containerd[1508]: 2025-05-15 11:54:38.101 [INFO][3846] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" host="localhost" May 15 11:54:38.131612 containerd[1508]: 2025-05-15 11:54:38.103 [INFO][3846] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105 May 15 11:54:38.131612 containerd[1508]: 2025-05-15 11:54:38.106 [INFO][3846] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" host="localhost" May 15 11:54:38.131612 containerd[1508]: 2025-05-15 11:54:38.110 [INFO][3846] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" host="localhost" May 15 11:54:38.131612 containerd[1508]: 2025-05-15 11:54:38.110 [INFO][3846] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" host="localhost" May 15 11:54:38.131612 containerd[1508]: 2025-05-15 11:54:38.110 [INFO][3846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 11:54:38.131612 containerd[1508]: 2025-05-15 11:54:38.110 [INFO][3846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" HandleID="k8s-pod-network.d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Workload="localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0" May 15 11:54:38.131773 containerd[1508]: 2025-05-15 11:54:38.113 [INFO][3833] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvl6d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b2638139-58aa-4f7c-a7ba-8ff847602cba", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-pvl6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77262c1672d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:54:38.131856 containerd[1508]: 2025-05-15 11:54:38.113 [INFO][3833] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvl6d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0" May 15 11:54:38.131856 containerd[1508]: 2025-05-15 11:54:38.113 [INFO][3833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77262c1672d ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvl6d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0" May 15 11:54:38.131856 containerd[1508]: 2025-05-15 11:54:38.117 [INFO][3833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvl6d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0" May 15 
11:54:38.131958 containerd[1508]: 2025-05-15 11:54:38.118 [INFO][3833] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvl6d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b2638139-58aa-4f7c-a7ba-8ff847602cba", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105", Pod:"coredns-668d6bf9bc-pvl6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77262c1672d", MAC:"ba:39:56:83:05:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:54:38.131958 containerd[1508]: 2025-05-15 11:54:38.126 [INFO][3833] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvl6d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pvl6d-eth0" May 15 11:54:38.225183 containerd[1508]: time="2025-05-15T11:54:38.225084584Z" level=info msg="connecting to shim d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105" address="unix:///run/containerd/s/b8a04b07065c50b4383947a18bd20f0bee64eac78b8a5db2ecfe1ee3738e9ac9" namespace=k8s.io protocol=ttrpc version=3 May 15 11:54:38.291690 systemd[1]: Started cri-containerd-d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105.scope - libcontainer container d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105. 
May 15 11:54:38.304595 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 11:54:38.326145 containerd[1508]: time="2025-05-15T11:54:38.326097049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pvl6d,Uid:b2638139-58aa-4f7c-a7ba-8ff847602cba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105\"" May 15 11:54:38.347373 containerd[1508]: time="2025-05-15T11:54:38.347326277Z" level=info msg="CreateContainer within sandbox \"d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 11:54:38.358776 containerd[1508]: time="2025-05-15T11:54:38.358225014Z" level=info msg="Container 790e92f2a69b25dd49b89bc6710d56fa9e78ee1c0684b8104e29cd01707b5b2f: CDI devices from CRI Config.CDIDevices: []" May 15 11:54:38.363628 containerd[1508]: time="2025-05-15T11:54:38.363585681Z" level=info msg="CreateContainer within sandbox \"d036f9f9aa993189226e23ceb4e33b15ae80b65834bdf2d3841a72cf39813105\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"790e92f2a69b25dd49b89bc6710d56fa9e78ee1c0684b8104e29cd01707b5b2f\"" May 15 11:54:38.366153 containerd[1508]: time="2025-05-15T11:54:38.366128541Z" level=info msg="StartContainer for \"790e92f2a69b25dd49b89bc6710d56fa9e78ee1c0684b8104e29cd01707b5b2f\"" May 15 11:54:38.367444 containerd[1508]: time="2025-05-15T11:54:38.367170016Z" level=info msg="connecting to shim 790e92f2a69b25dd49b89bc6710d56fa9e78ee1c0684b8104e29cd01707b5b2f" address="unix:///run/containerd/s/b8a04b07065c50b4383947a18bd20f0bee64eac78b8a5db2ecfe1ee3738e9ac9" protocol=ttrpc version=3 May 15 11:54:38.388671 systemd[1]: Started cri-containerd-790e92f2a69b25dd49b89bc6710d56fa9e78ee1c0684b8104e29cd01707b5b2f.scope - libcontainer container 790e92f2a69b25dd49b89bc6710d56fa9e78ee1c0684b8104e29cd01707b5b2f. 
May 15 11:54:38.416525 containerd[1508]: time="2025-05-15T11:54:38.416457968Z" level=info msg="StartContainer for \"790e92f2a69b25dd49b89bc6710d56fa9e78ee1c0684b8104e29cd01707b5b2f\" returns successfully" May 15 11:54:39.044796 kubelet[2615]: I0515 11:54:39.044678 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pvl6d" podStartSLOduration=29.044660932 podStartE2EDuration="29.044660932s" podCreationTimestamp="2025-05-15 11:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:54:39.044412659 +0000 UTC m=+35.234675031" watchObservedRunningTime="2025-05-15 11:54:39.044660932 +0000 UTC m=+35.234923304" May 15 11:54:39.468685 systemd-networkd[1414]: cali77262c1672d: Gained IPv6LL May 15 11:54:39.886168 containerd[1508]: time="2025-05-15T11:54:39.886118265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x62nn,Uid:dbc4a777-ce62-4800-8ac3-f6b4ef044eab,Namespace:calico-system,Attempt:0,}" May 15 11:54:39.886712 containerd[1508]: time="2025-05-15T11:54:39.886684717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675675fd54-pt9h5,Uid:4ae821ff-2451-4dd4-b829-888ac44a3d95,Namespace:calico-system,Attempt:0,}" May 15 11:54:39.953289 kubelet[2615]: I0515 11:54:39.953245 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 11:54:40.037676 systemd-networkd[1414]: cali0dbd294617f: Link UP May 15 11:54:40.038573 systemd-networkd[1414]: cali0dbd294617f: Gained carrier May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:39.939 [INFO][3966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--x62nn-eth0 csi-node-driver- calico-system dbc4a777-ce62-4800-8ac3-f6b4ef044eab 581 0 2025-05-15 11:54:17 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-x62nn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0dbd294617f [] []}} ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Namespace="calico-system" Pod="csi-node-driver-x62nn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x62nn-" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:39.939 [INFO][3966] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Namespace="calico-system" Pod="csi-node-driver-x62nn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x62nn-eth0" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:39.981 [INFO][3994] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" HandleID="k8s-pod-network.5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Workload="localhost-k8s-csi--node--driver--x62nn-eth0" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:39.999 [INFO][3994] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" HandleID="k8s-pod-network.5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Workload="localhost-k8s-csi--node--driver--x62nn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000375320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-x62nn", "timestamp":"2025-05-15 11:54:39.981416959 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.000 [INFO][3994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.000 [INFO][3994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.000 [INFO][3994] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.002 [INFO][3994] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" host="localhost" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.006 [INFO][3994] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.013 [INFO][3994] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.015 [INFO][3994] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.017 [INFO][3994] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.018 [INFO][3994] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" host="localhost" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.019 [INFO][3994] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad May 15 11:54:40.052722 containerd[1508]: 2025-05-15 
11:54:40.026 [INFO][3994] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" host="localhost" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.031 [INFO][3994] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" host="localhost" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.031 [INFO][3994] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" host="localhost" May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.031 [INFO][3994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 11:54:40.052722 containerd[1508]: 2025-05-15 11:54:40.031 [INFO][3994] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" HandleID="k8s-pod-network.5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Workload="localhost-k8s-csi--node--driver--x62nn-eth0" May 15 11:54:40.053530 containerd[1508]: 2025-05-15 11:54:40.034 [INFO][3966] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Namespace="calico-system" Pod="csi-node-driver-x62nn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x62nn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x62nn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbc4a777-ce62-4800-8ac3-f6b4ef044eab", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 17, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-x62nn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0dbd294617f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:54:40.053530 containerd[1508]: 2025-05-15 11:54:40.034 [INFO][3966] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Namespace="calico-system" Pod="csi-node-driver-x62nn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x62nn-eth0" May 15 11:54:40.053530 containerd[1508]: 2025-05-15 11:54:40.034 [INFO][3966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dbd294617f ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Namespace="calico-system" Pod="csi-node-driver-x62nn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x62nn-eth0" May 15 11:54:40.053530 containerd[1508]: 2025-05-15 11:54:40.038 [INFO][3966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Namespace="calico-system" 
Pod="csi-node-driver-x62nn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x62nn-eth0" May 15 11:54:40.053530 containerd[1508]: 2025-05-15 11:54:40.039 [INFO][3966] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Namespace="calico-system" Pod="csi-node-driver-x62nn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x62nn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x62nn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbc4a777-ce62-4800-8ac3-f6b4ef044eab", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad", Pod:"csi-node-driver-x62nn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0dbd294617f", MAC:"2a:09:bb:9f:0f:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:54:40.053530 
containerd[1508]: 2025-05-15 11:54:40.049 [INFO][3966] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" Namespace="calico-system" Pod="csi-node-driver-x62nn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x62nn-eth0" May 15 11:54:40.096649 containerd[1508]: time="2025-05-15T11:54:40.096600558Z" level=info msg="TaskExit event in podsandbox handler container_id:\"295d3bf421a58d4f2f1608b0a9bc0d5dae3ed61511a6f9af8323f3a90414faeb\" id:\"ebeeabdca32656975d71b4412466bc71af1f2f90b90527328ffa64deafb5f591\" pid:4024 exited_at:{seconds:1747310080 nanos:96297694}" May 15 11:54:40.101136 containerd[1508]: time="2025-05-15T11:54:40.101098568Z" level=info msg="connecting to shim 5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad" address="unix:///run/containerd/s/5df850e28510318d6735898cc81c8a114eecbb5215e1b4936d09f06e03a37644" namespace=k8s.io protocol=ttrpc version=3 May 15 11:54:40.141766 systemd[1]: Started cri-containerd-5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad.scope - libcontainer container 5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad. 
May 15 11:54:40.159781 systemd-networkd[1414]: cali974ddff8caa: Link UP
May 15 11:54:40.160000 systemd-networkd[1414]: cali974ddff8caa: Gained carrier
May 15 11:54:40.177358 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:39.946 [INFO][3977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0 calico-kube-controllers-675675fd54- calico-system 4ae821ff-2451-4dd4-b829-888ac44a3d95 668 0 2025-05-15 11:54:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:675675fd54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-675675fd54-pt9h5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali974ddff8caa [] []}} ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Namespace="calico-system" Pod="calico-kube-controllers-675675fd54-pt9h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:39.946 [INFO][3977] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Namespace="calico-system" Pod="calico-kube-controllers-675675fd54-pt9h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:39.987 [INFO][4000] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" HandleID="k8s-pod-network.03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Workload="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.000 [INFO][4000] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" HandleID="k8s-pod-network.03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Workload="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000362a50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-675675fd54-pt9h5", "timestamp":"2025-05-15 11:54:39.987880088 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.000 [INFO][4000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.032 [INFO][4000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.032 [INFO][4000] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.105 [INFO][4000] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" host="localhost"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.110 [INFO][4000] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.116 [INFO][4000] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.120 [INFO][4000] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.126 [INFO][4000] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.127 [INFO][4000] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" host="localhost"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.132 [INFO][4000] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.139 [INFO][4000] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" host="localhost"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.150 [INFO][4000] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" host="localhost"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.150 [INFO][4000] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" host="localhost"
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.150 [INFO][4000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 11:54:40.182842 containerd[1508]: 2025-05-15 11:54:40.150 [INFO][4000] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" HandleID="k8s-pod-network.03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Workload="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0"
May 15 11:54:40.183392 containerd[1508]: 2025-05-15 11:54:40.155 [INFO][3977] cni-plugin/k8s.go 386: Populated endpoint ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Namespace="calico-system" Pod="calico-kube-controllers-675675fd54-pt9h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0", GenerateName:"calico-kube-controllers-675675fd54-", Namespace:"calico-system", SelfLink:"", UID:"4ae821ff-2451-4dd4-b829-888ac44a3d95", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"675675fd54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-675675fd54-pt9h5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali974ddff8caa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 11:54:40.183392 containerd[1508]: 2025-05-15 11:54:40.155 [INFO][3977] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Namespace="calico-system" Pod="calico-kube-controllers-675675fd54-pt9h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0"
May 15 11:54:40.183392 containerd[1508]: 2025-05-15 11:54:40.155 [INFO][3977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali974ddff8caa ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Namespace="calico-system" Pod="calico-kube-controllers-675675fd54-pt9h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0"
May 15 11:54:40.183392 containerd[1508]: 2025-05-15 11:54:40.157 [INFO][3977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Namespace="calico-system" Pod="calico-kube-controllers-675675fd54-pt9h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0"
May 15 11:54:40.183392 containerd[1508]: 2025-05-15 11:54:40.158 [INFO][3977] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Namespace="calico-system" Pod="calico-kube-controllers-675675fd54-pt9h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0", GenerateName:"calico-kube-controllers-675675fd54-", Namespace:"calico-system", SelfLink:"", UID:"4ae821ff-2451-4dd4-b829-888ac44a3d95", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"675675fd54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd", Pod:"calico-kube-controllers-675675fd54-pt9h5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali974ddff8caa", MAC:"c6:5f:42:ae:7b:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 11:54:40.183392 containerd[1508]: 2025-05-15 11:54:40.171 [INFO][3977] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" Namespace="calico-system" Pod="calico-kube-controllers-675675fd54-pt9h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675675fd54--pt9h5-eth0"
May 15 11:54:40.210479 containerd[1508]: time="2025-05-15T11:54:40.210433000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"295d3bf421a58d4f2f1608b0a9bc0d5dae3ed61511a6f9af8323f3a90414faeb\" id:\"8cd6cafaf3eb0b2d8e6af97c6a271a0f5112ca6a06dd3a51c8be7bfbab31be2f\" pid:4095 exited_at:{seconds:1747310080 nanos:210182646}"
May 15 11:54:40.230168 containerd[1508]: time="2025-05-15T11:54:40.230117769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x62nn,Uid:dbc4a777-ce62-4800-8ac3-f6b4ef044eab,Namespace:calico-system,Attempt:0,} returns sandbox id \"5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad\""
May 15 11:54:40.232187 containerd[1508]: time="2025-05-15T11:54:40.232152593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
May 15 11:54:40.249477 containerd[1508]: time="2025-05-15T11:54:40.249434325Z" level=info msg="connecting to shim 03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd" address="unix:///run/containerd/s/25f601926d86a2b8de9ff9675b7794568e51afba60f3cc26b5f52b36df933fe0" namespace=k8s.io protocol=ttrpc version=3
May 15 11:54:40.276680 systemd[1]: Started cri-containerd-03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd.scope - libcontainer container 03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd.
May 15 11:54:40.288849 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 11:54:40.311520 containerd[1508]: time="2025-05-15T11:54:40.311453325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675675fd54-pt9h5,Uid:4ae821ff-2451-4dd4-b829-888ac44a3d95,Namespace:calico-system,Attempt:0,} returns sandbox id \"03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd\""
May 15 11:54:41.419897 containerd[1508]: time="2025-05-15T11:54:41.419848473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 11:54:41.420521 containerd[1508]: time="2025-05-15T11:54:41.420474041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935"
May 15 11:54:41.421511 containerd[1508]: time="2025-05-15T11:54:41.421415633Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 11:54:41.423580 containerd[1508]: time="2025-05-15T11:54:41.423551651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 11:54:41.424847 containerd[1508]: time="2025-05-15T11:54:41.424736920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.192545214s"
May 15 11:54:41.424847 containerd[1508]: time="2025-05-15T11:54:41.424768674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\""
May 15 11:54:41.428081 containerd[1508]: time="2025-05-15T11:54:41.428047728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 15 11:54:41.430520 containerd[1508]: time="2025-05-15T11:54:41.429643163Z" level=info msg="CreateContainer within sandbox \"5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 15 11:54:41.454521 containerd[1508]: time="2025-05-15T11:54:41.452027483Z" level=info msg="Container ef2353430e3e25bab84e31a9635109a31aa5eaea086f034fac7440ed02354bbe: CDI devices from CRI Config.CDIDevices: []"
May 15 11:54:41.458639 containerd[1508]: time="2025-05-15T11:54:41.458598749Z" level=info msg="CreateContainer within sandbox \"5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ef2353430e3e25bab84e31a9635109a31aa5eaea086f034fac7440ed02354bbe\""
May 15 11:54:41.459588 containerd[1508]: time="2025-05-15T11:54:41.459532862Z" level=info msg="StartContainer for \"ef2353430e3e25bab84e31a9635109a31aa5eaea086f034fac7440ed02354bbe\""
May 15 11:54:41.461665 containerd[1508]: time="2025-05-15T11:54:41.461130216Z" level=info msg="connecting to shim ef2353430e3e25bab84e31a9635109a31aa5eaea086f034fac7440ed02354bbe" address="unix:///run/containerd/s/5df850e28510318d6735898cc81c8a114eecbb5215e1b4936d09f06e03a37644" protocol=ttrpc version=3
May 15 11:54:41.482669 systemd[1]: Started cri-containerd-ef2353430e3e25bab84e31a9635109a31aa5eaea086f034fac7440ed02354bbe.scope - libcontainer container ef2353430e3e25bab84e31a9635109a31aa5eaea086f034fac7440ed02354bbe.
May 15 11:54:41.529873 containerd[1508]: time="2025-05-15T11:54:41.528626995Z" level=info msg="StartContainer for \"ef2353430e3e25bab84e31a9635109a31aa5eaea086f034fac7440ed02354bbe\" returns successfully"
May 15 11:54:41.629035 systemd[1]: Started sshd@8-10.0.0.31:22-10.0.0.1:39314.service - OpenSSH per-connection server daemon (10.0.0.1:39314).
May 15 11:54:41.644728 systemd-networkd[1414]: cali0dbd294617f: Gained IPv6LL
May 15 11:54:41.681178 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 39314 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:54:41.683057 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:54:41.687811 systemd-logind[1480]: New session 9 of user core.
May 15 11:54:41.696672 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 11:54:41.850593 sshd[4222]: Connection closed by 10.0.0.1 port 39314
May 15 11:54:41.850977 sshd-session[4220]: pam_unix(sshd:session): session closed for user core
May 15 11:54:41.854692 systemd[1]: sshd@8-10.0.0.31:22-10.0.0.1:39314.service: Deactivated successfully.
May 15 11:54:41.856808 systemd[1]: session-9.scope: Deactivated successfully.
May 15 11:54:41.857573 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit.
May 15 11:54:41.858677 systemd-logind[1480]: Removed session 9.
May 15 11:54:41.902742 containerd[1508]: time="2025-05-15T11:54:41.902690070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c5478b6-4kpt7,Uid:73e051ec-f022-4ded-b603-a96eab642f9c,Namespace:calico-apiserver,Attempt:0,}"
May 15 11:54:42.038551 systemd-networkd[1414]: calib9df2a82d70: Link UP
May 15 11:54:42.038829 systemd-networkd[1414]: calib9df2a82d70: Gained carrier
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:41.941 [INFO][4235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0 calico-apiserver-76c5478b6- calico-apiserver 73e051ec-f022-4ded-b603-a96eab642f9c 670 0 2025-05-15 11:54:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76c5478b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76c5478b6-4kpt7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9df2a82d70 [] []}} ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-4kpt7" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:41.941 [INFO][4235] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-4kpt7" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:41.973 [INFO][4250] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" HandleID="k8s-pod-network.86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Workload="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:41.986 [INFO][4250] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" HandleID="k8s-pod-network.86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Workload="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000429550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76c5478b6-4kpt7", "timestamp":"2025-05-15 11:54:41.973485899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:41.986 [INFO][4250] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:41.986 [INFO][4250] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:41.987 [INFO][4250] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:41.989 [INFO][4250] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" host="localhost"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:41.993 [INFO][4250] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:41.999 [INFO][4250] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:42.001 [INFO][4250] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:42.003 [INFO][4250] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:42.003 [INFO][4250] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" host="localhost"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:42.006 [INFO][4250] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:42.020 [INFO][4250] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" host="localhost"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:42.033 [INFO][4250] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" host="localhost"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:42.033 [INFO][4250] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" host="localhost"
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:42.033 [INFO][4250] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 11:54:42.052925 containerd[1508]: 2025-05-15 11:54:42.033 [INFO][4250] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" HandleID="k8s-pod-network.86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Workload="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0"
May 15 11:54:42.054694 containerd[1508]: 2025-05-15 11:54:42.036 [INFO][4235] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-4kpt7" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0", GenerateName:"calico-apiserver-76c5478b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"73e051ec-f022-4ded-b603-a96eab642f9c", ResourceVersion:"670", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c5478b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76c5478b6-4kpt7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9df2a82d70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 11:54:42.054694 containerd[1508]: 2025-05-15 11:54:42.036 [INFO][4235] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-4kpt7" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0"
May 15 11:54:42.054694 containerd[1508]: 2025-05-15 11:54:42.036 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9df2a82d70 ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-4kpt7" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0"
May 15 11:54:42.054694 containerd[1508]: 2025-05-15 11:54:42.038 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-4kpt7" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0"
May 15 11:54:42.054694 containerd[1508]: 2025-05-15 11:54:42.039 [INFO][4235] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-4kpt7" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0", GenerateName:"calico-apiserver-76c5478b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"73e051ec-f022-4ded-b603-a96eab642f9c", ResourceVersion:"670", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c5478b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0", Pod:"calico-apiserver-76c5478b6-4kpt7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9df2a82d70", MAC:"c2:28:f5:e3:61:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 11:54:42.054694 containerd[1508]: 2025-05-15 11:54:42.050 [INFO][4235] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-4kpt7" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--4kpt7-eth0"
May 15 11:54:42.081889 containerd[1508]: time="2025-05-15T11:54:42.080335685Z" level=info msg="connecting to shim 86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0" address="unix:///run/containerd/s/4d5f62dd25b60dafa533762426af4a1bda0eb77d5ec2fc805a43da68ec4de9e5" namespace=k8s.io protocol=ttrpc version=3
May 15 11:54:42.092605 systemd-networkd[1414]: cali974ddff8caa: Gained IPv6LL
May 15 11:54:42.104687 systemd[1]: Started cri-containerd-86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0.scope - libcontainer container 86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0.
May 15 11:54:42.120143 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 11:54:42.157691 containerd[1508]: time="2025-05-15T11:54:42.157575874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c5478b6-4kpt7,Uid:73e051ec-f022-4ded-b603-a96eab642f9c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0\""
May 15 11:54:42.886898 containerd[1508]: time="2025-05-15T11:54:42.886613748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k75xk,Uid:a9aaee40-fbce-4f87-980b-c2aa9858f7e4,Namespace:kube-system,Attempt:0,}"
May 15 11:54:42.887261 containerd[1508]: time="2025-05-15T11:54:42.886739006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c5478b6-8djln,Uid:12f63fcf-8e10-4714-8b17-fc2a1f973263,Namespace:calico-apiserver,Attempt:0,}"
May 15 11:54:43.042327 systemd-networkd[1414]: cali7d83e4d4f0c: Link UP
May 15 11:54:43.043152 systemd-networkd[1414]: cali7d83e4d4f0c: Gained carrier
May 15 11:54:43.052791 systemd-networkd[1414]: calib9df2a82d70: Gained IPv6LL
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:42.943 [INFO][4329] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0 calico-apiserver-76c5478b6- calico-apiserver 12f63fcf-8e10-4714-8b17-fc2a1f973263 669 0 2025-05-15 11:54:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76c5478b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76c5478b6-8djln eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7d83e4d4f0c [] []}} ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-8djln" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--8djln-"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:42.943 [INFO][4329] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-8djln" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:42.982 [INFO][4362] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" HandleID="k8s-pod-network.4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Workload="localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:42.997 [INFO][4362] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" HandleID="k8s-pod-network.4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Workload="localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e0010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76c5478b6-8djln", "timestamp":"2025-05-15 11:54:42.982337856 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:42.997 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:42.997 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:42.997 [INFO][4362] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:42.999 [INFO][4362] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" host="localhost"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.004 [INFO][4362] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.010 [INFO][4362] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.012 [INFO][4362] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.016 [INFO][4362] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.016 [INFO][4362] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" host="localhost"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.018 [INFO][4362] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.028 [INFO][4362] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" host="localhost"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.036 [INFO][4362] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" host="localhost"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.036 [INFO][4362] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" host="localhost"
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.036 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 11:54:43.057692 containerd[1508]: 2025-05-15 11:54:43.036 [INFO][4362] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" HandleID="k8s-pod-network.4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Workload="localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0" May 15 11:54:43.058554 containerd[1508]: 2025-05-15 11:54:43.039 [INFO][4329] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-8djln" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0", GenerateName:"calico-apiserver-76c5478b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"12f63fcf-8e10-4714-8b17-fc2a1f973263", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c5478b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76c5478b6-8djln", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d83e4d4f0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:54:43.058554 containerd[1508]: 2025-05-15 11:54:43.039 [INFO][4329] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-8djln" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0" May 15 11:54:43.058554 containerd[1508]: 2025-05-15 11:54:43.039 [INFO][4329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d83e4d4f0c ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-8djln" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0" May 15 11:54:43.058554 containerd[1508]: 2025-05-15 11:54:43.043 [INFO][4329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-8djln" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0" May 15 11:54:43.058554 containerd[1508]: 2025-05-15 11:54:43.044 [INFO][4329] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-8djln" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0", GenerateName:"calico-apiserver-76c5478b6-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"12f63fcf-8e10-4714-8b17-fc2a1f973263", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c5478b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f", Pod:"calico-apiserver-76c5478b6-8djln", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d83e4d4f0c", MAC:"52:e8:47:7f:ce:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:54:43.058554 containerd[1508]: 2025-05-15 11:54:43.055 [INFO][4329] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" Namespace="calico-apiserver" Pod="calico-apiserver-76c5478b6-8djln" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c5478b6--8djln-eth0" May 15 11:54:43.107347 containerd[1508]: time="2025-05-15T11:54:43.107298237Z" level=info msg="connecting to shim 4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f" address="unix:///run/containerd/s/ef6b21458a08dd1e36544380c2203d2ed480809e0a7ace9b8a89a5e931cdde0e" namespace=k8s.io protocol=ttrpc version=3 May 15 11:54:43.153316 systemd-networkd[1414]: calic7a67891c43: Link UP May 
15 11:54:43.153907 systemd-networkd[1414]: calic7a67891c43: Gained carrier May 15 11:54:43.154802 systemd[1]: Started cri-containerd-4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f.scope - libcontainer container 4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f. May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:42.937 [INFO][4322] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--k75xk-eth0 coredns-668d6bf9bc- kube-system a9aaee40-fbce-4f87-980b-c2aa9858f7e4 665 0 2025-05-15 11:54:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-k75xk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic7a67891c43 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Namespace="kube-system" Pod="coredns-668d6bf9bc-k75xk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k75xk-" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:42.937 [INFO][4322] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Namespace="kube-system" Pod="coredns-668d6bf9bc-k75xk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k75xk-eth0" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:42.979 [INFO][4356] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" HandleID="k8s-pod-network.2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Workload="localhost-k8s-coredns--668d6bf9bc--k75xk-eth0" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:42.997 [INFO][4356] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" HandleID="k8s-pod-network.2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Workload="localhost-k8s-coredns--668d6bf9bc--k75xk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d94e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-k75xk", "timestamp":"2025-05-15 11:54:42.979682476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:42.997 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.036 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.036 [INFO][4356] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.101 [INFO][4356] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" host="localhost" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.109 [INFO][4356] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.117 [INFO][4356] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.120 [INFO][4356] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.125 [INFO][4356] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.125 [INFO][4356] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" host="localhost" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.128 [INFO][4356] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716 May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.133 [INFO][4356] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" host="localhost" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.142 [INFO][4356] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" host="localhost" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.142 [INFO][4356] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" host="localhost" May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.142 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 11:54:43.172508 containerd[1508]: 2025-05-15 11:54:43.143 [INFO][4356] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" HandleID="k8s-pod-network.2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Workload="localhost-k8s-coredns--668d6bf9bc--k75xk-eth0" May 15 11:54:43.173057 containerd[1508]: 2025-05-15 11:54:43.150 [INFO][4322] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Namespace="kube-system" Pod="coredns-668d6bf9bc-k75xk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k75xk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--k75xk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a9aaee40-fbce-4f87-980b-c2aa9858f7e4", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-k75xk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7a67891c43", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:54:43.173057 containerd[1508]: 2025-05-15 11:54:43.150 [INFO][4322] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Namespace="kube-system" Pod="coredns-668d6bf9bc-k75xk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k75xk-eth0" May 15 11:54:43.173057 containerd[1508]: 2025-05-15 11:54:43.150 [INFO][4322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7a67891c43 ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Namespace="kube-system" Pod="coredns-668d6bf9bc-k75xk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k75xk-eth0" May 15 11:54:43.173057 containerd[1508]: 2025-05-15 11:54:43.153 [INFO][4322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Namespace="kube-system" Pod="coredns-668d6bf9bc-k75xk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k75xk-eth0" May 15 11:54:43.173057 containerd[1508]: 2025-05-15 11:54:43.153 [INFO][4322] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Namespace="kube-system" Pod="coredns-668d6bf9bc-k75xk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k75xk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--k75xk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a9aaee40-fbce-4f87-980b-c2aa9858f7e4", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716", Pod:"coredns-668d6bf9bc-k75xk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7a67891c43", MAC:"c6:e6:14:fd:7c:3f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:54:43.173057 containerd[1508]: 2025-05-15 11:54:43.166 [INFO][4322] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-k75xk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k75xk-eth0" May 15 11:54:43.178887 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 11:54:43.207418 containerd[1508]: time="2025-05-15T11:54:43.207367295Z" level=info msg="connecting to shim 2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716" address="unix:///run/containerd/s/fdf96c6d1fe289b8fcb72f8b9c443d08f9a07c0936805c24754c04c53f4320f8" namespace=k8s.io protocol=ttrpc version=3 May 15 11:54:43.219713 containerd[1508]: time="2025-05-15T11:54:43.219670631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c5478b6-8djln,Uid:12f63fcf-8e10-4714-8b17-fc2a1f973263,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f\"" May 15 11:54:43.239729 systemd[1]: Started cri-containerd-2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716.scope - libcontainer container 2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716. 
May 15 11:54:43.255697 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 11:54:43.284193 containerd[1508]: time="2025-05-15T11:54:43.284150258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k75xk,Uid:a9aaee40-fbce-4f87-980b-c2aa9858f7e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716\"" May 15 11:54:43.289146 containerd[1508]: time="2025-05-15T11:54:43.289100228Z" level=info msg="CreateContainer within sandbox \"2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 11:54:43.296553 containerd[1508]: time="2025-05-15T11:54:43.296499507Z" level=info msg="Container 9b18936da96f9d2034149fba03224a8aa347033e09abc9f48b0d684ade55fe8d: CDI devices from CRI Config.CDIDevices: []" May 15 11:54:43.304276 containerd[1508]: time="2025-05-15T11:54:43.304227411Z" level=info msg="CreateContainer within sandbox \"2b61372971284bcb2e87b537304d37b106a8d67f25dd7317499290856002d716\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b18936da96f9d2034149fba03224a8aa347033e09abc9f48b0d684ade55fe8d\"" May 15 11:54:43.305831 containerd[1508]: time="2025-05-15T11:54:43.305796428Z" level=info msg="StartContainer for \"9b18936da96f9d2034149fba03224a8aa347033e09abc9f48b0d684ade55fe8d\"" May 15 11:54:43.308401 containerd[1508]: time="2025-05-15T11:54:43.308348720Z" level=info msg="connecting to shim 9b18936da96f9d2034149fba03224a8aa347033e09abc9f48b0d684ade55fe8d" address="unix:///run/containerd/s/fdf96c6d1fe289b8fcb72f8b9c443d08f9a07c0936805c24754c04c53f4320f8" protocol=ttrpc version=3 May 15 11:54:43.341780 systemd[1]: Started cri-containerd-9b18936da96f9d2034149fba03224a8aa347033e09abc9f48b0d684ade55fe8d.scope - libcontainer container 9b18936da96f9d2034149fba03224a8aa347033e09abc9f48b0d684ade55fe8d. 
May 15 11:54:43.351684 containerd[1508]: time="2025-05-15T11:54:43.351612904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:43.355356 containerd[1508]: time="2025-05-15T11:54:43.355087642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 15 11:54:43.356296 containerd[1508]: time="2025-05-15T11:54:43.356249647Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:43.360655 containerd[1508]: time="2025-05-15T11:54:43.360602197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:43.361686 containerd[1508]: time="2025-05-15T11:54:43.361653780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.933570419s" May 15 11:54:43.361759 containerd[1508]: time="2025-05-15T11:54:43.361691374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 15 11:54:43.365137 containerd[1508]: time="2025-05-15T11:54:43.365101082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 11:54:43.370816 containerd[1508]: time="2025-05-15T11:54:43.370712381Z" level=info msg="StartContainer for 
\"9b18936da96f9d2034149fba03224a8aa347033e09abc9f48b0d684ade55fe8d\" returns successfully" May 15 11:54:43.374052 containerd[1508]: time="2025-05-15T11:54:43.374004509Z" level=info msg="CreateContainer within sandbox \"03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 11:54:43.381070 containerd[1508]: time="2025-05-15T11:54:43.381016693Z" level=info msg="Container fe85f5dc455973382f1fd219eab6347eaba54c7fb170846773685aca170cb43c: CDI devices from CRI Config.CDIDevices: []" May 15 11:54:43.426203 containerd[1508]: time="2025-05-15T11:54:43.425997870Z" level=info msg="CreateContainer within sandbox \"03fb074b0e4d74a29983dbc0507f4b980024d50aa2382e75f45377455e938ccd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fe85f5dc455973382f1fd219eab6347eaba54c7fb170846773685aca170cb43c\"" May 15 11:54:43.426894 containerd[1508]: time="2025-05-15T11:54:43.426758422Z" level=info msg="StartContainer for \"fe85f5dc455973382f1fd219eab6347eaba54c7fb170846773685aca170cb43c\"" May 15 11:54:43.428827 containerd[1508]: time="2025-05-15T11:54:43.428793081Z" level=info msg="connecting to shim fe85f5dc455973382f1fd219eab6347eaba54c7fb170846773685aca170cb43c" address="unix:///run/containerd/s/25f601926d86a2b8de9ff9675b7794568e51afba60f3cc26b5f52b36df933fe0" protocol=ttrpc version=3 May 15 11:54:43.449717 systemd[1]: Started cri-containerd-fe85f5dc455973382f1fd219eab6347eaba54c7fb170846773685aca170cb43c.scope - libcontainer container fe85f5dc455973382f1fd219eab6347eaba54c7fb170846773685aca170cb43c. 
May 15 11:54:43.491687 containerd[1508]: time="2025-05-15T11:54:43.491649820Z" level=info msg="StartContainer for \"fe85f5dc455973382f1fd219eab6347eaba54c7fb170846773685aca170cb43c\" returns successfully" May 15 11:54:44.072957 kubelet[2615]: I0515 11:54:44.071485 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k75xk" podStartSLOduration=34.071265173 podStartE2EDuration="34.071265173s" podCreationTimestamp="2025-05-15 11:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:54:44.070738778 +0000 UTC m=+40.261001150" watchObservedRunningTime="2025-05-15 11:54:44.071265173 +0000 UTC m=+40.261527545" May 15 11:54:44.972686 systemd-networkd[1414]: cali7d83e4d4f0c: Gained IPv6LL May 15 11:54:44.985629 containerd[1508]: time="2025-05-15T11:54:44.985576233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:44.993622 containerd[1508]: time="2025-05-15T11:54:44.993585531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 15 11:54:44.999455 containerd[1508]: time="2025-05-15T11:54:44.999415024Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:45.001353 containerd[1508]: time="2025-05-15T11:54:45.001304601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:54:45.001932 containerd[1508]: time="2025-05-15T11:54:45.001905467Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.636570224s" May 15 11:54:45.001978 containerd[1508]: time="2025-05-15T11:54:45.001934902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 15 11:54:45.003348 containerd[1508]: time="2025-05-15T11:54:45.003075523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 11:54:45.004208 containerd[1508]: time="2025-05-15T11:54:45.004134676Z" level=info msg="CreateContainer within sandbox \"5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 11:54:45.021985 containerd[1508]: time="2025-05-15T11:54:45.021667797Z" level=info msg="Container 39225db7fa0ada368a306daa8b5972eaa312d2dc3c46e8a0754dc0ef7e4aacb8: CDI devices from CRI Config.CDIDevices: []" May 15 11:54:45.032208 containerd[1508]: time="2025-05-15T11:54:45.032151747Z" level=info msg="CreateContainer within sandbox \"5293240fe1000494950cd83c11fcafac9031e56caa025adbcc2b2db1ec141aad\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"39225db7fa0ada368a306daa8b5972eaa312d2dc3c46e8a0754dc0ef7e4aacb8\"" May 15 11:54:45.032822 containerd[1508]: time="2025-05-15T11:54:45.032721737Z" level=info msg="StartContainer for \"39225db7fa0ada368a306daa8b5972eaa312d2dc3c46e8a0754dc0ef7e4aacb8\"" May 15 11:54:45.034269 containerd[1508]: time="2025-05-15T11:54:45.034243857Z" level=info msg="connecting to shim 
39225db7fa0ada368a306daa8b5972eaa312d2dc3c46e8a0754dc0ef7e4aacb8" address="unix:///run/containerd/s/5df850e28510318d6735898cc81c8a114eecbb5215e1b4936d09f06e03a37644" protocol=ttrpc version=3
May 15 11:54:45.038005 systemd-networkd[1414]: calic7a67891c43: Gained IPv6LL
May 15 11:54:45.057645 systemd[1]: Started cri-containerd-39225db7fa0ada368a306daa8b5972eaa312d2dc3c46e8a0754dc0ef7e4aacb8.scope - libcontainer container 39225db7fa0ada368a306daa8b5972eaa312d2dc3c46e8a0754dc0ef7e4aacb8.
May 15 11:54:45.067828 kubelet[2615]: I0515 11:54:45.067797 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 11:54:45.103672 containerd[1508]: time="2025-05-15T11:54:45.103623098Z" level=info msg="StartContainer for \"39225db7fa0ada368a306daa8b5972eaa312d2dc3c46e8a0754dc0ef7e4aacb8\" returns successfully"
May 15 11:54:45.970516 kubelet[2615]: I0515 11:54:45.970459 2615 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 15 11:54:45.970516 kubelet[2615]: I0515 11:54:45.970522 2615 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 15 11:54:46.085479 kubelet[2615]: I0515 11:54:46.084918 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-675675fd54-pt9h5" podStartSLOduration=26.034066699 podStartE2EDuration="29.084785734s" podCreationTimestamp="2025-05-15 11:54:17 +0000 UTC" firstStartedPulling="2025-05-15 11:54:40.313008358 +0000 UTC m=+36.503270730" lastFinishedPulling="2025-05-15 11:54:43.363727393 +0000 UTC m=+39.553989765" observedRunningTime="2025-05-15 11:54:44.112074543 +0000 UTC m=+40.302336955" watchObservedRunningTime="2025-05-15 11:54:46.084785734 +0000 UTC m=+42.275048186"
May 15 11:54:46.870752 systemd[1]: Started sshd@9-10.0.0.31:22-10.0.0.1:60006.service - OpenSSH per-connection server daemon (10.0.0.1:60006).
May 15 11:54:46.934994 sshd[4615]: Accepted publickey for core from 10.0.0.1 port 60006 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:54:46.937018 sshd-session[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:54:46.943516 systemd-logind[1480]: New session 10 of user core.
May 15 11:54:46.949648 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 11:54:47.024051 containerd[1508]: time="2025-05-15T11:54:47.024007846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 11:54:47.024926 containerd[1508]: time="2025-05-15T11:54:47.024752416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603"
May 15 11:54:47.025656 containerd[1508]: time="2025-05-15T11:54:47.025625047Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 11:54:47.027739 containerd[1508]: time="2025-05-15T11:54:47.027702420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 11:54:47.028633 containerd[1508]: time="2025-05-15T11:54:47.028592689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 2.025482331s"
May 15 11:54:47.028721 containerd[1508]: time="2025-05-15T11:54:47.028635082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 15 11:54:47.031143 containerd[1508]: time="2025-05-15T11:54:47.031116396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 15 11:54:47.032826 containerd[1508]: time="2025-05-15T11:54:47.032708481Z" level=info msg="CreateContainer within sandbox \"86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 15 11:54:47.041328 containerd[1508]: time="2025-05-15T11:54:47.039802873Z" level=info msg="Container 304d27c4ff713a82d5edc382b9bb8216afb60942fa3e1383a850b84e78945b73: CDI devices from CRI Config.CDIDevices: []"
May 15 11:54:47.048203 containerd[1508]: time="2025-05-15T11:54:47.048025138Z" level=info msg="CreateContainer within sandbox \"86d91a19bdae15d4ae5dac08a6dd6347dcf1382b15d0ae5dd09b4d65e906f1e0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"304d27c4ff713a82d5edc382b9bb8216afb60942fa3e1383a850b84e78945b73\""
May 15 11:54:47.048551 containerd[1508]: time="2025-05-15T11:54:47.048471233Z" level=info msg="StartContainer for \"304d27c4ff713a82d5edc382b9bb8216afb60942fa3e1383a850b84e78945b73\""
May 15 11:54:47.050281 containerd[1508]: time="2025-05-15T11:54:47.050202137Z" level=info msg="connecting to shim 304d27c4ff713a82d5edc382b9bb8216afb60942fa3e1383a850b84e78945b73" address="unix:///run/containerd/s/4d5f62dd25b60dafa533762426af4a1bda0eb77d5ec2fc805a43da68ec4de9e5" protocol=ttrpc version=3
May 15 11:54:47.074761 systemd[1]: Started cri-containerd-304d27c4ff713a82d5edc382b9bb8216afb60942fa3e1383a850b84e78945b73.scope - libcontainer container 304d27c4ff713a82d5edc382b9bb8216afb60942fa3e1383a850b84e78945b73.
May 15 11:54:47.117607 containerd[1508]: time="2025-05-15T11:54:47.117509076Z" level=info msg="StartContainer for \"304d27c4ff713a82d5edc382b9bb8216afb60942fa3e1383a850b84e78945b73\" returns successfully"
May 15 11:54:47.154004 sshd[4617]: Connection closed by 10.0.0.1 port 60006
May 15 11:54:47.155030 sshd-session[4615]: pam_unix(sshd:session): session closed for user core
May 15 11:54:47.165663 systemd[1]: sshd@9-10.0.0.31:22-10.0.0.1:60006.service: Deactivated successfully.
May 15 11:54:47.167280 systemd[1]: session-10.scope: Deactivated successfully.
May 15 11:54:47.169456 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit.
May 15 11:54:47.173368 systemd[1]: Started sshd@10-10.0.0.31:22-10.0.0.1:60008.service - OpenSSH per-connection server daemon (10.0.0.1:60008).
May 15 11:54:47.175983 systemd-logind[1480]: Removed session 10.
May 15 11:54:47.248387 sshd[4665]: Accepted publickey for core from 10.0.0.1 port 60008 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:54:47.250018 sshd-session[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:54:47.255008 systemd-logind[1480]: New session 11 of user core.
May 15 11:54:47.265694 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 11:54:47.346506 containerd[1508]: time="2025-05-15T11:54:47.346167303Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 11:54:47.346903 containerd[1508]: time="2025-05-15T11:54:47.346876038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 15 11:54:47.350701 containerd[1508]: time="2025-05-15T11:54:47.350655600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 319.507608ms"
May 15 11:54:47.350701 containerd[1508]: time="2025-05-15T11:54:47.350690395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 15 11:54:47.366266 containerd[1508]: time="2025-05-15T11:54:47.366221301Z" level=info msg="CreateContainer within sandbox \"4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 15 11:54:47.512014 sshd[4671]: Connection closed by 10.0.0.1 port 60008
May 15 11:54:47.512260 sshd-session[4665]: pam_unix(sshd:session): session closed for user core
May 15 11:54:47.526824 systemd[1]: sshd@10-10.0.0.31:22-10.0.0.1:60008.service: Deactivated successfully.
May 15 11:54:47.531142 systemd[1]: session-11.scope: Deactivated successfully.
May 15 11:54:47.532576 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit.
May 15 11:54:47.536063 systemd-logind[1480]: Removed session 11.
May 15 11:54:47.538109 systemd[1]: Started sshd@11-10.0.0.31:22-10.0.0.1:60022.service - OpenSSH per-connection server daemon (10.0.0.1:60022).
May 15 11:54:47.591603 sshd[4682]: Accepted publickey for core from 10.0.0.1 port 60022 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:54:47.592620 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:54:47.599076 containerd[1508]: time="2025-05-15T11:54:47.598659610Z" level=info msg="Container ad0c446deef6b37d18a41c2c15c8398b3855bfd79a86a3bb32d2a184ba143858: CDI devices from CRI Config.CDIDevices: []"
May 15 11:54:47.601347 systemd-logind[1480]: New session 12 of user core.
May 15 11:54:47.613898 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 11:54:47.619123 containerd[1508]: time="2025-05-15T11:54:47.618889942Z" level=info msg="CreateContainer within sandbox \"4ab48252aac6fbb6c934feb006d42193c76ff7008314ffb0737cb4fdc8b4a86f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ad0c446deef6b37d18a41c2c15c8398b3855bfd79a86a3bb32d2a184ba143858\""
May 15 11:54:47.621739 containerd[1508]: time="2025-05-15T11:54:47.621067860Z" level=info msg="StartContainer for \"ad0c446deef6b37d18a41c2c15c8398b3855bfd79a86a3bb32d2a184ba143858\""
May 15 11:54:47.622119 containerd[1508]: time="2025-05-15T11:54:47.622088310Z" level=info msg="connecting to shim ad0c446deef6b37d18a41c2c15c8398b3855bfd79a86a3bb32d2a184ba143858" address="unix:///run/containerd/s/ef6b21458a08dd1e36544380c2203d2ed480809e0a7ace9b8a89a5e931cdde0e" protocol=ttrpc version=3
May 15 11:54:47.653409 systemd[1]: Started cri-containerd-ad0c446deef6b37d18a41c2c15c8398b3855bfd79a86a3bb32d2a184ba143858.scope - libcontainer container ad0c446deef6b37d18a41c2c15c8398b3855bfd79a86a3bb32d2a184ba143858.
May 15 11:54:47.728760 containerd[1508]: time="2025-05-15T11:54:47.728708562Z" level=info msg="StartContainer for \"ad0c446deef6b37d18a41c2c15c8398b3855bfd79a86a3bb32d2a184ba143858\" returns successfully"
May 15 11:54:47.839101 sshd[4685]: Connection closed by 10.0.0.1 port 60022
May 15 11:54:47.840067 sshd-session[4682]: pam_unix(sshd:session): session closed for user core
May 15 11:54:47.843391 systemd[1]: sshd@11-10.0.0.31:22-10.0.0.1:60022.service: Deactivated successfully.
May 15 11:54:47.845267 systemd[1]: session-12.scope: Deactivated successfully.
May 15 11:54:47.848989 systemd-logind[1480]: Session 12 logged out. Waiting for processes to exit.
May 15 11:54:47.849938 systemd-logind[1480]: Removed session 12.
May 15 11:54:48.158203 kubelet[2615]: I0515 11:54:48.158135 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-x62nn" podStartSLOduration=26.387024371 podStartE2EDuration="31.158118226s" podCreationTimestamp="2025-05-15 11:54:17 +0000 UTC" firstStartedPulling="2025-05-15 11:54:40.231859127 +0000 UTC m=+36.422121499" lastFinishedPulling="2025-05-15 11:54:45.002952982 +0000 UTC m=+41.193215354" observedRunningTime="2025-05-15 11:54:46.085642123 +0000 UTC m=+42.275904495" watchObservedRunningTime="2025-05-15 11:54:48.158118226 +0000 UTC m=+44.348380598"
May 15 11:54:48.158646 kubelet[2615]: I0515 11:54:48.158249 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76c5478b6-8djln" podStartSLOduration=28.028038214 podStartE2EDuration="32.158245728s" podCreationTimestamp="2025-05-15 11:54:16 +0000 UTC" firstStartedPulling="2025-05-15 11:54:43.221213493 +0000 UTC m=+39.411475865" lastFinishedPulling="2025-05-15 11:54:47.351421007 +0000 UTC m=+43.541683379" observedRunningTime="2025-05-15 11:54:48.157724043 +0000 UTC m=+44.347986494" watchObservedRunningTime="2025-05-15 11:54:48.158245728 +0000 UTC m=+44.348508140"
May 15 11:54:48.244685 kubelet[2615]: I0515 11:54:48.244394 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76c5478b6-4kpt7" podStartSLOduration=27.374020614 podStartE2EDuration="32.244376764s" podCreationTimestamp="2025-05-15 11:54:16 +0000 UTC" firstStartedPulling="2025-05-15 11:54:42.159034221 +0000 UTC m=+38.349296593" lastFinishedPulling="2025-05-15 11:54:47.029390371 +0000 UTC m=+43.219652743" observedRunningTime="2025-05-15 11:54:48.228340378 +0000 UTC m=+44.418602750" watchObservedRunningTime="2025-05-15 11:54:48.244376764 +0000 UTC m=+44.434639136"
May 15 11:54:52.158965 kubelet[2615]: I0515 11:54:52.158903 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 11:54:52.195982 containerd[1508]: time="2025-05-15T11:54:52.195933116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe85f5dc455973382f1fd219eab6347eaba54c7fb170846773685aca170cb43c\" id:\"43ecf772a83c3b100c08460dee4ca4a5dc1848bcff6605ffeb102babfdc7d96c\" pid:4760 exited_at:{seconds:1747310092 nanos:194086068}"
May 15 11:54:52.232469 containerd[1508]: time="2025-05-15T11:54:52.232414638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe85f5dc455973382f1fd219eab6347eaba54c7fb170846773685aca170cb43c\" id:\"e06ec2bff790f2e6cde51fa9ec60354ffe2c1f88dfe8f0546f5cb9c44a8196eb\" pid:4782 exited_at:{seconds:1747310092 nanos:232023528}"
May 15 11:54:52.858048 systemd[1]: Started sshd@12-10.0.0.31:22-10.0.0.1:43648.service - OpenSSH per-connection server daemon (10.0.0.1:43648).
May 15 11:54:52.916105 sshd[4793]: Accepted publickey for core from 10.0.0.1 port 43648 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:54:52.917678 sshd-session[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:54:52.922406 systemd-logind[1480]: New session 13 of user core.
May 15 11:54:52.935668 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 11:54:53.104144 sshd[4795]: Connection closed by 10.0.0.1 port 43648
May 15 11:54:53.104520 sshd-session[4793]: pam_unix(sshd:session): session closed for user core
May 15 11:54:53.118125 systemd[1]: sshd@12-10.0.0.31:22-10.0.0.1:43648.service: Deactivated successfully.
May 15 11:54:53.120399 systemd[1]: session-13.scope: Deactivated successfully.
May 15 11:54:53.123148 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit.
May 15 11:54:53.126199 systemd[1]: Started sshd@13-10.0.0.31:22-10.0.0.1:43658.service - OpenSSH per-connection server daemon (10.0.0.1:43658).
May 15 11:54:53.127186 systemd-logind[1480]: Removed session 13.
May 15 11:54:53.185293 sshd[4808]: Accepted publickey for core from 10.0.0.1 port 43658 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:54:53.186696 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:54:53.191210 systemd-logind[1480]: New session 14 of user core.
May 15 11:54:53.201668 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 11:54:53.419160 sshd[4810]: Connection closed by 10.0.0.1 port 43658
May 15 11:54:53.419688 sshd-session[4808]: pam_unix(sshd:session): session closed for user core
May 15 11:54:53.431345 systemd[1]: sshd@13-10.0.0.31:22-10.0.0.1:43658.service: Deactivated successfully.
May 15 11:54:53.434987 systemd[1]: session-14.scope: Deactivated successfully.
May 15 11:54:53.435824 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit.
May 15 11:54:53.438861 systemd[1]: Started sshd@14-10.0.0.31:22-10.0.0.1:43670.service - OpenSSH per-connection server daemon (10.0.0.1:43670).
May 15 11:54:53.439351 systemd-logind[1480]: Removed session 14.
May 15 11:54:53.496926 sshd[4829]: Accepted publickey for core from 10.0.0.1 port 43670 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:54:53.498080 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:54:53.501845 systemd-logind[1480]: New session 15 of user core.
May 15 11:54:53.512640 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 11:54:54.291997 sshd[4831]: Connection closed by 10.0.0.1 port 43670
May 15 11:54:54.293885 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
May 15 11:54:54.302937 systemd[1]: sshd@14-10.0.0.31:22-10.0.0.1:43670.service: Deactivated successfully.
May 15 11:54:54.304468 systemd[1]: session-15.scope: Deactivated successfully.
May 15 11:54:54.305307 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit.
May 15 11:54:54.313979 systemd[1]: Started sshd@15-10.0.0.31:22-10.0.0.1:43674.service - OpenSSH per-connection server daemon (10.0.0.1:43674).
May 15 11:54:54.315327 systemd-logind[1480]: Removed session 15.
May 15 11:54:54.377522 sshd[4851]: Accepted publickey for core from 10.0.0.1 port 43674 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:54:54.378582 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:54:54.382935 systemd-logind[1480]: New session 16 of user core.
May 15 11:54:54.393696 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 11:54:54.692584 sshd[4854]: Connection closed by 10.0.0.1 port 43674
May 15 11:54:54.694661 sshd-session[4851]: pam_unix(sshd:session): session closed for user core
May 15 11:54:54.702730 systemd[1]: sshd@15-10.0.0.31:22-10.0.0.1:43674.service: Deactivated successfully.
May 15 11:54:54.705851 systemd[1]: session-16.scope: Deactivated successfully.
May 15 11:54:54.707763 systemd-logind[1480]: Session 16 logged out. Waiting for processes to exit.
May 15 11:54:54.711061 systemd[1]: Started sshd@16-10.0.0.31:22-10.0.0.1:43686.service - OpenSSH per-connection server daemon (10.0.0.1:43686).
May 15 11:54:54.713587 systemd-logind[1480]: Removed session 16.
May 15 11:54:54.771991 sshd[4866]: Accepted publickey for core from 10.0.0.1 port 43686 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:54:54.773406 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:54:54.779469 systemd-logind[1480]: New session 17 of user core.
May 15 11:54:54.787697 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 11:54:54.932989 sshd[4868]: Connection closed by 10.0.0.1 port 43686
May 15 11:54:54.933289 sshd-session[4866]: pam_unix(sshd:session): session closed for user core
May 15 11:54:54.936321 systemd[1]: sshd@16-10.0.0.31:22-10.0.0.1:43686.service: Deactivated successfully.
May 15 11:54:54.938441 systemd[1]: session-17.scope: Deactivated successfully.
May 15 11:54:54.940860 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit.
May 15 11:54:54.942594 systemd-logind[1480]: Removed session 17.
May 15 11:54:59.952882 systemd[1]: Started sshd@17-10.0.0.31:22-10.0.0.1:43698.service - OpenSSH per-connection server daemon (10.0.0.1:43698).
May 15 11:55:00.019734 sshd[4885]: Accepted publickey for core from 10.0.0.1 port 43698 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:55:00.021002 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:55:00.025532 systemd-logind[1480]: New session 18 of user core.
May 15 11:55:00.034646 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 11:55:00.169549 sshd[4887]: Connection closed by 10.0.0.1 port 43698
May 15 11:55:00.169883 sshd-session[4885]: pam_unix(sshd:session): session closed for user core
May 15 11:55:00.172750 systemd[1]: sshd@17-10.0.0.31:22-10.0.0.1:43698.service: Deactivated successfully.
May 15 11:55:00.174646 systemd[1]: session-18.scope: Deactivated successfully.
May 15 11:55:00.175931 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit.
May 15 11:55:00.177443 systemd-logind[1480]: Removed session 18.
May 15 11:55:05.181936 systemd[1]: Started sshd@18-10.0.0.31:22-10.0.0.1:45174.service - OpenSSH per-connection server daemon (10.0.0.1:45174).
May 15 11:55:05.250318 sshd[4906]: Accepted publickey for core from 10.0.0.1 port 45174 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:55:05.251638 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:55:05.255862 systemd-logind[1480]: New session 19 of user core.
May 15 11:55:05.264662 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 11:55:05.438017 sshd[4908]: Connection closed by 10.0.0.1 port 45174
May 15 11:55:05.438295 sshd-session[4906]: pam_unix(sshd:session): session closed for user core
May 15 11:55:05.441244 systemd[1]: session-19.scope: Deactivated successfully.
May 15 11:55:05.442822 systemd-logind[1480]: Session 19 logged out. Waiting for processes to exit.
May 15 11:55:05.442991 systemd[1]: sshd@18-10.0.0.31:22-10.0.0.1:45174.service: Deactivated successfully.
May 15 11:55:05.446119 systemd-logind[1480]: Removed session 19.
May 15 11:55:10.163795 containerd[1508]: time="2025-05-15T11:55:10.163734476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"295d3bf421a58d4f2f1608b0a9bc0d5dae3ed61511a6f9af8323f3a90414faeb\" id:\"095c58f538df49b3809c4f815a1e3a814efa7177d7d4a477c76e9f8b6d54e334\" pid:4932 exited_at:{seconds:1747310110 nanos:163399300}"
May 15 11:55:10.452434 systemd[1]: Started sshd@19-10.0.0.31:22-10.0.0.1:45182.service - OpenSSH per-connection server daemon (10.0.0.1:45182).
May 15 11:55:10.519328 sshd[4946]: Accepted publickey for core from 10.0.0.1 port 45182 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 11:55:10.520747 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 11:55:10.525113 systemd-logind[1480]: New session 20 of user core.
May 15 11:55:10.535703 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 11:55:10.714337 sshd[4948]: Connection closed by 10.0.0.1 port 45182
May 15 11:55:10.714638 sshd-session[4946]: pam_unix(sshd:session): session closed for user core
May 15 11:55:10.718722 systemd[1]: sshd@19-10.0.0.31:22-10.0.0.1:45182.service: Deactivated successfully.
May 15 11:55:10.721127 systemd[1]: session-20.scope: Deactivated successfully.
May 15 11:55:10.722589 systemd-logind[1480]: Session 20 logged out. Waiting for processes to exit.
May 15 11:55:10.724483 systemd-logind[1480]: Removed session 20.