May 16 09:39:02.807302 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 09:39:02.807331 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri May 16 08:35:42 -00 2025
May 16 09:39:02.807342 kernel: KASLR enabled
May 16 09:39:02.807348 kernel: efi: EFI v2.7 by EDK II
May 16 09:39:02.807354 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 16 09:39:02.807359 kernel: random: crng init done
May 16 09:39:02.807366 kernel: secureboot: Secure boot disabled
May 16 09:39:02.807371 kernel: ACPI: Early table checksum verification disabled
May 16 09:39:02.807377 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 16 09:39:02.807384 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 09:39:02.807391 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:39:02.807398 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:39:02.807404 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:39:02.807410 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:39:02.807417 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:39:02.807425 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:39:02.807431 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:39:02.807437 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:39:02.807443 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:39:02.807449 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 09:39:02.807455 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 16 09:39:02.807461 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 09:39:02.807467 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 16 09:39:02.807473 kernel: Zone ranges:
May 16 09:39:02.807479 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 09:39:02.807486 kernel: DMA32 empty
May 16 09:39:02.807492 kernel: Normal empty
May 16 09:39:02.807498 kernel: Device empty
May 16 09:39:02.807503 kernel: Movable zone start for each node
May 16 09:39:02.807509 kernel: Early memory node ranges
May 16 09:39:02.807515 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 16 09:39:02.807521 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 16 09:39:02.807527 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 16 09:39:02.807533 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 16 09:39:02.807539 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 16 09:39:02.807545 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 16 09:39:02.807550 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 16 09:39:02.807558 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 16 09:39:02.807563 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 16 09:39:02.807569 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 16 09:39:02.807578 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 16 09:39:02.807594 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 16 09:39:02.807614 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 16 09:39:02.807623 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 09:39:02.807629 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 09:39:02.807636 kernel: psci: probing for conduit method from ACPI.
May 16 09:39:02.807642 kernel: psci: PSCIv1.1 detected in firmware.
May 16 09:39:02.807648 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 09:39:02.807654 kernel: psci: Trusted OS migration not required
May 16 09:39:02.807661 kernel: psci: SMC Calling Convention v1.1
May 16 09:39:02.807667 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 09:39:02.807674 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 16 09:39:02.807680 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 16 09:39:02.807688 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 09:39:02.807694 kernel: Detected PIPT I-cache on CPU0
May 16 09:39:02.807700 kernel: CPU features: detected: GIC system register CPU interface
May 16 09:39:02.807707 kernel: CPU features: detected: Spectre-v4
May 16 09:39:02.807713 kernel: CPU features: detected: Spectre-BHB
May 16 09:39:02.807719 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 09:39:02.807726 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 09:39:02.807732 kernel: CPU features: detected: ARM erratum 1418040
May 16 09:39:02.807738 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 09:39:02.807744 kernel: alternatives: applying boot alternatives
May 16 09:39:02.807752 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efb8cca3b981587a1314d5462995d10283ca386e95a1cc1f8f2d642520bcc17
May 16 09:39:02.807760 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 09:39:02.807773 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 09:39:02.807780 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 09:39:02.807786 kernel: Fallback order for Node 0: 0
May 16 09:39:02.807792 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 16 09:39:02.807799 kernel: Policy zone: DMA
May 16 09:39:02.807805 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 09:39:02.807811 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 16 09:39:02.807818 kernel: software IO TLB: area num 4.
May 16 09:39:02.807824 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 16 09:39:02.807831 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 16 09:39:02.807837 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 09:39:02.807845 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 09:39:02.807852 kernel: rcu: RCU event tracing is enabled.
May 16 09:39:02.807859 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 09:39:02.807865 kernel: Trampoline variant of Tasks RCU enabled.
May 16 09:39:02.807872 kernel: Tracing variant of Tasks RCU enabled.
May 16 09:39:02.807878 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 09:39:02.807884 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 09:39:02.807891 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 09:39:02.807897 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 09:39:02.807904 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 09:39:02.807910 kernel: GICv3: 256 SPIs implemented
May 16 09:39:02.807918 kernel: GICv3: 0 Extended SPIs implemented
May 16 09:39:02.807924 kernel: Root IRQ handler: gic_handle_irq
May 16 09:39:02.807931 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 16 09:39:02.807937 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 16 09:39:02.807943 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 09:39:02.807949 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 09:39:02.807956 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 16 09:39:02.807962 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 16 09:39:02.807969 kernel: GICv3: using LPI property table @0x0000000040100000
May 16 09:39:02.807975 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 16 09:39:02.807982 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 09:39:02.807989 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 09:39:02.807997 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 09:39:02.808004 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 09:39:02.808011 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 09:39:02.808017 kernel: arm-pv: using stolen time PV
May 16 09:39:02.808024 kernel: Console: colour dummy device 80x25
May 16 09:39:02.808030 kernel: ACPI: Core revision 20240827
May 16 09:39:02.808037 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 09:39:02.808044 kernel: pid_max: default: 32768 minimum: 301
May 16 09:39:02.808054 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 16 09:39:02.808063 kernel: landlock: Up and running.
May 16 09:39:02.808071 kernel: SELinux: Initializing.
May 16 09:39:02.808079 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 09:39:02.808085 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 09:39:02.808092 kernel: rcu: Hierarchical SRCU implementation.
May 16 09:39:02.808099 kernel: rcu: Max phase no-delay instances is 400.
May 16 09:39:02.808106 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 16 09:39:02.808113 kernel: Remapping and enabling EFI services.
May 16 09:39:02.808119 kernel: smp: Bringing up secondary CPUs ...
May 16 09:39:02.808126 kernel: Detected PIPT I-cache on CPU1
May 16 09:39:02.808138 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 09:39:02.808145 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 16 09:39:02.808153 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 09:39:02.808160 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 09:39:02.808167 kernel: Detected PIPT I-cache on CPU2
May 16 09:39:02.808174 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 09:39:02.808181 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 16 09:39:02.808189 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 09:39:02.808196 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 09:39:02.808203 kernel: Detected PIPT I-cache on CPU3
May 16 09:39:02.808210 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 09:39:02.808221 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 16 09:39:02.808227 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 09:39:02.808234 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 09:39:02.808241 kernel: smp: Brought up 1 node, 4 CPUs
May 16 09:39:02.808248 kernel: SMP: Total of 4 processors activated.
May 16 09:39:02.808255 kernel: CPU: All CPU(s) started at EL1
May 16 09:39:02.808263 kernel: CPU features: detected: 32-bit EL0 Support
May 16 09:39:02.808270 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 09:39:02.808277 kernel: CPU features: detected: Common not Private translations
May 16 09:39:02.808284 kernel: CPU features: detected: CRC32 instructions
May 16 09:39:02.808291 kernel: CPU features: detected: Enhanced Virtualization Traps
May 16 09:39:02.808297 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 09:39:02.808304 kernel: CPU features: detected: LSE atomic instructions
May 16 09:39:02.808311 kernel: CPU features: detected: Privileged Access Never
May 16 09:39:02.808318 kernel: CPU features: detected: RAS Extension Support
May 16 09:39:02.808326 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 09:39:02.808333 kernel: alternatives: applying system-wide alternatives
May 16 09:39:02.808340 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 16 09:39:02.808348 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved)
May 16 09:39:02.808355 kernel: devtmpfs: initialized
May 16 09:39:02.808362 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 09:39:02.808369 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 09:39:02.808376 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 16 09:39:02.808383 kernel: 0 pages in range for non-PLT usage
May 16 09:39:02.808391 kernel: 508544 pages in range for PLT usage
May 16 09:39:02.808398 kernel: pinctrl core: initialized pinctrl subsystem
May 16 09:39:02.808404 kernel: SMBIOS 3.0.0 present.
May 16 09:39:02.808411 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 16 09:39:02.808418 kernel: DMI: Memory slots populated: 1/1
May 16 09:39:02.808425 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 09:39:02.808432 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 09:39:02.808439 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 09:39:02.808446 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 09:39:02.808454 kernel: audit: initializing netlink subsys (disabled)
May 16 09:39:02.808461 kernel: audit: type=2000 audit(0.029:1): state=initialized audit_enabled=0 res=1
May 16 09:39:02.808468 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 09:39:02.808474 kernel: cpuidle: using governor menu
May 16 09:39:02.808481 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 09:39:02.808488 kernel: ASID allocator initialised with 32768 entries
May 16 09:39:02.808495 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 09:39:02.808502 kernel: Serial: AMBA PL011 UART driver
May 16 09:39:02.808508 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 09:39:02.808516 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 16 09:39:02.808523 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 16 09:39:02.808530 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 16 09:39:02.808537 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 09:39:02.808544 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 16 09:39:02.808551 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 16 09:39:02.808557 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 16 09:39:02.808564 kernel: ACPI: Added _OSI(Module Device)
May 16 09:39:02.808571 kernel: ACPI: Added _OSI(Processor Device)
May 16 09:39:02.808632 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 09:39:02.808643 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 09:39:02.808650 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 09:39:02.808657 kernel: ACPI: Interpreter enabled
May 16 09:39:02.808664 kernel: ACPI: Using GIC for interrupt routing
May 16 09:39:02.808670 kernel: ACPI: MCFG table detected, 1 entries
May 16 09:39:02.808677 kernel: ACPI: CPU0 has been hot-added
May 16 09:39:02.808684 kernel: ACPI: CPU1 has been hot-added
May 16 09:39:02.808691 kernel: ACPI: CPU2 has been hot-added
May 16 09:39:02.808698 kernel: ACPI: CPU3 has been hot-added
May 16 09:39:02.808707 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 09:39:02.808714 kernel: printk: legacy console [ttyAMA0] enabled
May 16 09:39:02.808721 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 09:39:02.808888 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 09:39:02.808960 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 09:39:02.809018 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 09:39:02.809076 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 09:39:02.809134 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 09:39:02.809143 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 09:39:02.809150 kernel: PCI host bridge to bus 0000:00
May 16 09:39:02.809214 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 09:39:02.809269 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 09:39:02.809321 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 09:39:02.809372 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 09:39:02.809459 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 16 09:39:02.809527 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 16 09:39:02.809614 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 16 09:39:02.809679 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 16 09:39:02.809740 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 09:39:02.809808 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 16 09:39:02.809869 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 16 09:39:02.809931 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 16 09:39:02.809987 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 09:39:02.810038 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 09:39:02.810089 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 09:39:02.810098 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 09:39:02.810105 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 09:39:02.810112 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 09:39:02.810121 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 09:39:02.810128 kernel: iommu: Default domain type: Translated
May 16 09:39:02.810135 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 09:39:02.810142 kernel: efivars: Registered efivars operations
May 16 09:39:02.810150 kernel: vgaarb: loaded
May 16 09:39:02.810157 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 09:39:02.810164 kernel: VFS: Disk quotas dquot_6.6.0
May 16 09:39:02.810171 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 09:39:02.810178 kernel: pnp: PnP ACPI init
May 16 09:39:02.810248 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 09:39:02.810258 kernel: pnp: PnP ACPI: found 1 devices
May 16 09:39:02.810265 kernel: NET: Registered PF_INET protocol family
May 16 09:39:02.810272 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 09:39:02.810279 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 09:39:02.810286 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 09:39:02.810293 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 09:39:02.810300 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 09:39:02.810309 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 09:39:02.810316 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 09:39:02.810323 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 09:39:02.810330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 09:39:02.810336 kernel: PCI: CLS 0 bytes, default 64
May 16 09:39:02.810343 kernel: kvm [1]: HYP mode not available
May 16 09:39:02.810350 kernel: Initialise system trusted keyrings
May 16 09:39:02.810357 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 09:39:02.810364 kernel: Key type asymmetric registered
May 16 09:39:02.810372 kernel: Asymmetric key parser 'x509' registered
May 16 09:39:02.810379 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 16 09:39:02.810386 kernel: io scheduler mq-deadline registered
May 16 09:39:02.810393 kernel: io scheduler kyber registered
May 16 09:39:02.810400 kernel: io scheduler bfq registered
May 16 09:39:02.810407 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 09:39:02.810414 kernel: ACPI: button: Power Button [PWRB]
May 16 09:39:02.810421 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 09:39:02.810480 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 09:39:02.810490 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 09:39:02.810497 kernel: thunder_xcv, ver 1.0
May 16 09:39:02.810504 kernel: thunder_bgx, ver 1.0
May 16 09:39:02.810511 kernel: nicpf, ver 1.0
May 16 09:39:02.810518 kernel: nicvf, ver 1.0
May 16 09:39:02.810605 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 09:39:02.810671 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T09:39:02 UTC (1747388342)
May 16 09:39:02.810680 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 09:39:02.810690 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 16 09:39:02.810697 kernel: watchdog: NMI not fully supported
May 16 09:39:02.810704 kernel: watchdog: Hard watchdog permanently disabled
May 16 09:39:02.810711 kernel: NET: Registered PF_INET6 protocol family
May 16 09:39:02.810718 kernel: Segment Routing with IPv6
May 16 09:39:02.810725 kernel: In-situ OAM (IOAM) with IPv6
May 16 09:39:02.810732 kernel: NET: Registered PF_PACKET protocol family
May 16 09:39:02.810739 kernel: Key type dns_resolver registered
May 16 09:39:02.810746 kernel: registered taskstats version 1
May 16 09:39:02.810753 kernel: Loading compiled-in X.509 certificates
May 16 09:39:02.810761 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: e7b097e50e016e102bfdd733c3ddebaed9ee0e35'
May 16 09:39:02.810777 kernel: Demotion targets for Node 0: null
May 16 09:39:02.810784 kernel: Key type .fscrypt registered
May 16 09:39:02.810791 kernel: Key type fscrypt-provisioning registered
May 16 09:39:02.810798 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 09:39:02.810805 kernel: ima: Allocated hash algorithm: sha1
May 16 09:39:02.810811 kernel: ima: No architecture policies found
May 16 09:39:02.810818 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 09:39:02.810827 kernel: clk: Disabling unused clocks
May 16 09:39:02.810835 kernel: PM: genpd: Disabling unused power domains
May 16 09:39:02.810841 kernel: Warning: unable to open an initial console.
May 16 09:39:02.810848 kernel: Freeing unused kernel memory: 39424K
May 16 09:39:02.810855 kernel: Run /init as init process
May 16 09:39:02.810862 kernel: with arguments:
May 16 09:39:02.810869 kernel: /init
May 16 09:39:02.810876 kernel: with environment:
May 16 09:39:02.810882 kernel: HOME=/
May 16 09:39:02.810891 kernel: TERM=linux
May 16 09:39:02.810897 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 09:39:02.810905 systemd[1]: Successfully made /usr/ read-only.
May 16 09:39:02.810915 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 09:39:02.810923 systemd[1]: Detected virtualization kvm.
May 16 09:39:02.810930 systemd[1]: Detected architecture arm64.
May 16 09:39:02.810937 systemd[1]: Running in initrd.
May 16 09:39:02.810945 systemd[1]: No hostname configured, using default hostname.
May 16 09:39:02.810954 systemd[1]: Hostname set to .
May 16 09:39:02.810961 systemd[1]: Initializing machine ID from VM UUID.
May 16 09:39:02.810968 systemd[1]: Queued start job for default target initrd.target.
May 16 09:39:02.810976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 09:39:02.810984 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 09:39:02.810991 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 09:39:02.810999 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 09:39:02.811007 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 09:39:02.811017 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 09:39:02.811025 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 09:39:02.811033 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 09:39:02.811040 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 09:39:02.811052 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 09:39:02.811063 systemd[1]: Reached target paths.target - Path Units.
May 16 09:39:02.811071 systemd[1]: Reached target slices.target - Slice Units.
May 16 09:39:02.811080 systemd[1]: Reached target swap.target - Swaps.
May 16 09:39:02.811087 systemd[1]: Reached target timers.target - Timer Units.
May 16 09:39:02.811095 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 09:39:02.811102 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 09:39:02.811110 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 09:39:02.811118 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 09:39:02.811126 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 09:39:02.811134 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 09:39:02.811144 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 09:39:02.811152 systemd[1]: Reached target sockets.target - Socket Units.
May 16 09:39:02.811160 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 09:39:02.811167 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 09:39:02.811174 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 09:39:02.811183 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 16 09:39:02.811190 systemd[1]: Starting systemd-fsck-usr.service...
May 16 09:39:02.811198 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 09:39:02.811205 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 09:39:02.811214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 09:39:02.811222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 09:39:02.811230 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 09:39:02.811237 systemd[1]: Finished systemd-fsck-usr.service.
May 16 09:39:02.811247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 09:39:02.811272 systemd-journald[244]: Collecting audit messages is disabled.
May 16 09:39:02.811292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 09:39:02.811300 systemd-journald[244]: Journal started
May 16 09:39:02.811320 systemd-journald[244]: Runtime Journal (/run/log/journal/ba9fa96852254642842b0b76727ffaa5) is 6M, max 48.5M, 42.4M free.
May 16 09:39:02.801292 systemd-modules-load[245]: Inserted module 'overlay'
May 16 09:39:02.817291 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 09:39:02.820031 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 09:39:02.823624 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 09:39:02.822705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 09:39:02.826823 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 09:39:02.828626 systemd-modules-load[245]: Inserted module 'br_netfilter'
May 16 09:39:02.830486 kernel: Bridge firewalling registered
May 16 09:39:02.830179 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 09:39:02.832710 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 16 09:39:02.832931 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 09:39:02.836571 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 09:39:02.838678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 09:39:02.849059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 09:39:02.851238 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 09:39:02.857945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 09:39:02.859339 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 09:39:02.862901 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 09:39:02.869718 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efb8cca3b981587a1314d5462995d10283ca386e95a1cc1f8f2d642520bcc17
May 16 09:39:02.900213 systemd-resolved[294]: Positive Trust Anchors:
May 16 09:39:02.900229 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 09:39:02.900265 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 09:39:02.906010 systemd-resolved[294]: Defaulting to hostname 'linux'.
May 16 09:39:02.907020 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 09:39:02.910972 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 09:39:02.950619 kernel: SCSI subsystem initialized
May 16 09:39:02.955617 kernel: Loading iSCSI transport class v2.0-870.
May 16 09:39:02.963620 kernel: iscsi: registered transport (tcp)
May 16 09:39:02.975627 kernel: iscsi: registered transport (qla4xxx)
May 16 09:39:02.975652 kernel: QLogic iSCSI HBA Driver
May 16 09:39:02.991400 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 09:39:03.022671 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 09:39:03.024965 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 09:39:03.071080 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 09:39:03.073479 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 09:39:03.136608 kernel: raid6: neonx8 gen() 13937 MB/s May 16 09:39:03.153601 kernel: raid6: neonx4 gen() 15791 MB/s May 16 09:39:03.170600 kernel: raid6: neonx2 gen() 13163 MB/s May 16 09:39:03.187595 kernel: raid6: neonx1 gen() 10407 MB/s May 16 09:39:03.204597 kernel: raid6: int64x8 gen() 6893 MB/s May 16 09:39:03.221596 kernel: raid6: int64x4 gen() 7343 MB/s May 16 09:39:03.238597 kernel: raid6: int64x2 gen() 6084 MB/s May 16 09:39:03.255594 kernel: raid6: int64x1 gen() 5031 MB/s May 16 09:39:03.255612 kernel: raid6: using algorithm neonx4 gen() 15791 MB/s May 16 09:39:03.272605 kernel: raid6: .... xor() 12298 MB/s, rmw enabled May 16 09:39:03.272620 kernel: raid6: using neon recovery algorithm May 16 09:39:03.277731 kernel: xor: measuring software checksum speed May 16 09:39:03.277780 kernel: 8regs : 21630 MB/sec May 16 09:39:03.278742 kernel: 32regs : 20702 MB/sec May 16 09:39:03.278755 kernel: arm64_neon : 27441 MB/sec May 16 09:39:03.278768 kernel: xor: using function: arm64_neon (27441 MB/sec) May 16 09:39:03.334785 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 09:39:03.340912 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 09:39:03.343372 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 09:39:03.377133 systemd-udevd[499]: Using default interface naming scheme 'v255'. May 16 09:39:03.381223 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 09:39:03.383694 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 09:39:03.412734 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation May 16 09:39:03.435298 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 09:39:03.437696 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 09:39:03.491726 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 16 09:39:03.495223 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 09:39:03.540696 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 16 09:39:03.547805 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 09:39:03.548081 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 09:39:03.548094 kernel: GPT:9289727 != 19775487 May 16 09:39:03.548103 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 09:39:03.548111 kernel: GPT:9289727 != 19775487 May 16 09:39:03.548119 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 09:39:03.548128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 09:39:03.545344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 09:39:03.545459 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 09:39:03.548007 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 09:39:03.549705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 09:39:03.576161 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 09:39:03.584782 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 09:39:03.592791 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 09:39:03.593878 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 09:39:03.608291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 09:39:03.615058 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 09:39:03.616236 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
May 16 09:39:03.618567 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 09:39:03.621491 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 09:39:03.623541 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 09:39:03.626192 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 09:39:03.627978 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 09:39:03.644158 disk-uuid[590]: Primary Header is updated. May 16 09:39:03.644158 disk-uuid[590]: Secondary Entries is updated. May 16 09:39:03.644158 disk-uuid[590]: Secondary Header is updated. May 16 09:39:03.646800 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 09:39:03.649295 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 09:39:04.659613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 09:39:04.660251 disk-uuid[595]: The operation has completed successfully. May 16 09:39:04.687087 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 09:39:04.687206 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 09:39:04.719866 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 09:39:04.749102 sh[610]: Success May 16 09:39:04.765076 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 09:39:04.765116 kernel: device-mapper: uevent: version 1.0.3 May 16 09:39:04.765136 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 16 09:39:04.776601 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 16 09:39:04.805876 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 09:39:04.808738 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 16 09:39:04.823736 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 09:39:04.830685 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 16 09:39:04.830716 kernel: BTRFS: device fsid 9108ecbf-b780-4a5b-b31c-dcb97545c897 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (622) May 16 09:39:04.831999 kernel: BTRFS info (device dm-0): first mount of filesystem 9108ecbf-b780-4a5b-b31c-dcb97545c897 May 16 09:39:04.832708 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 16 09:39:04.832722 kernel: BTRFS info (device dm-0): using free-space-tree May 16 09:39:04.836512 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 09:39:04.837770 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 16 09:39:04.838963 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 09:39:04.839686 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 09:39:04.841180 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 09:39:04.869432 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (654) May 16 09:39:04.869487 kernel: BTRFS info (device vda6): first mount of filesystem 1663b735-9163-4a80-bc0d-8580d7a25027 May 16 09:39:04.869498 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 09:39:04.870193 kernel: BTRFS info (device vda6): using free-space-tree May 16 09:39:04.876600 kernel: BTRFS info (device vda6): last unmount of filesystem 1663b735-9163-4a80-bc0d-8580d7a25027 May 16 09:39:04.878599 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 09:39:04.880475 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 16 09:39:04.945290 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 09:39:04.948065 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 09:39:04.989597 systemd-networkd[795]: lo: Link UP May 16 09:39:04.989607 systemd-networkd[795]: lo: Gained carrier May 16 09:39:04.990394 systemd-networkd[795]: Enumeration completed May 16 09:39:04.990475 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 09:39:04.991146 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 09:39:04.991150 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 09:39:04.991644 systemd[1]: Reached target network.target - Network. May 16 09:39:04.994739 systemd-networkd[795]: eth0: Link UP May 16 09:39:04.994742 systemd-networkd[795]: eth0: Gained carrier May 16 09:39:04.994757 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 16 09:39:05.016653 systemd-networkd[795]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 09:39:05.029858 ignition[700]: Ignition 2.21.0 May 16 09:39:05.029872 ignition[700]: Stage: fetch-offline May 16 09:39:05.029898 ignition[700]: no configs at "/usr/lib/ignition/base.d" May 16 09:39:05.029906 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:39:05.030079 ignition[700]: parsed url from cmdline: "" May 16 09:39:05.030082 ignition[700]: no config URL provided May 16 09:39:05.030087 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" May 16 09:39:05.030092 ignition[700]: no config at "/usr/lib/ignition/user.ign" May 16 09:39:05.030110 ignition[700]: op(1): [started] loading QEMU firmware config module May 16 09:39:05.030114 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 09:39:05.039440 ignition[700]: op(1): [finished] loading QEMU firmware config module May 16 09:39:05.076512 ignition[700]: parsing config with SHA512: a5077f35af787e794e0b6909742f3f3865022b2ed88c41ac262b5d14fd979586fe7690b34578a2c7198478ea7f11c59236958b7d3d43606bfadc9067c17f867e May 16 09:39:05.080312 unknown[700]: fetched base config from "system" May 16 09:39:05.080324 unknown[700]: fetched user config from "qemu" May 16 09:39:05.080712 ignition[700]: fetch-offline: fetch-offline passed May 16 09:39:05.080779 ignition[700]: Ignition finished successfully May 16 09:39:05.084006 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 09:39:05.085559 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 09:39:05.087440 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 16 09:39:05.115371 ignition[809]: Ignition 2.21.0 May 16 09:39:05.115388 ignition[809]: Stage: kargs May 16 09:39:05.115511 ignition[809]: no configs at "/usr/lib/ignition/base.d" May 16 09:39:05.115519 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:39:05.117392 ignition[809]: kargs: kargs passed May 16 09:39:05.117437 ignition[809]: Ignition finished successfully May 16 09:39:05.119351 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 09:39:05.121419 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 09:39:05.144115 ignition[817]: Ignition 2.21.0 May 16 09:39:05.144129 ignition[817]: Stage: disks May 16 09:39:05.144254 ignition[817]: no configs at "/usr/lib/ignition/base.d" May 16 09:39:05.144263 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:39:05.145529 ignition[817]: disks: disks passed May 16 09:39:05.147150 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 09:39:05.145596 ignition[817]: Ignition finished successfully May 16 09:39:05.148400 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 09:39:05.149744 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 09:39:05.151584 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 09:39:05.153114 systemd[1]: Reached target sysinit.target - System Initialization. May 16 09:39:05.154972 systemd[1]: Reached target basic.target - Basic System. May 16 09:39:05.157536 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 09:39:05.188131 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 16 09:39:05.192946 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 09:39:05.195778 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 16 09:39:05.262603 kernel: EXT4-fs (vda9): mounted filesystem a09a4a8b-405d-466b-850e-ba0196efa117 r/w with ordered data mode. Quota mode: none. May 16 09:39:05.262801 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 09:39:05.263810 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 09:39:05.265943 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 09:39:05.267465 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 09:39:05.268479 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 09:39:05.268520 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 09:39:05.268543 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 09:39:05.279985 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 09:39:05.282615 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (835) May 16 09:39:05.282083 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 09:39:05.286609 kernel: BTRFS info (device vda6): first mount of filesystem 1663b735-9163-4a80-bc0d-8580d7a25027 May 16 09:39:05.286628 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 09:39:05.286637 kernel: BTRFS info (device vda6): using free-space-tree May 16 09:39:05.289483 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 09:39:05.323215 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory May 16 09:39:05.326913 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory May 16 09:39:05.330562 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory May 16 09:39:05.334247 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory May 16 09:39:05.402734 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 09:39:05.406657 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 09:39:05.408145 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 09:39:05.428607 kernel: BTRFS info (device vda6): last unmount of filesystem 1663b735-9163-4a80-bc0d-8580d7a25027 May 16 09:39:05.441985 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 09:39:05.449515 ignition[952]: INFO : Ignition 2.21.0 May 16 09:39:05.449515 ignition[952]: INFO : Stage: mount May 16 09:39:05.451284 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 09:39:05.451284 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:39:05.454716 ignition[952]: INFO : mount: mount passed May 16 09:39:05.454716 ignition[952]: INFO : Ignition finished successfully May 16 09:39:05.455321 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 09:39:05.458214 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 09:39:05.830124 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 09:39:05.831575 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 16 09:39:05.847601 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (964) May 16 09:39:05.849893 kernel: BTRFS info (device vda6): first mount of filesystem 1663b735-9163-4a80-bc0d-8580d7a25027 May 16 09:39:05.849912 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 09:39:05.849922 kernel: BTRFS info (device vda6): using free-space-tree May 16 09:39:05.853669 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 09:39:05.878142 ignition[981]: INFO : Ignition 2.21.0 May 16 09:39:05.878142 ignition[981]: INFO : Stage: files May 16 09:39:05.880161 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 09:39:05.880161 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:39:05.882192 ignition[981]: DEBUG : files: compiled without relabeling support, skipping May 16 09:39:05.883608 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 09:39:05.883608 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 09:39:05.886527 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 09:39:05.887873 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 09:39:05.887873 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 09:39:05.887075 unknown[981]: wrote ssh authorized keys file for user: core May 16 09:39:05.891527 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 16 09:39:05.891527 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 16 09:39:05.978258 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 09:39:06.629051 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 16 09:39:06.631024 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 16 09:39:06.631024 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 16 09:39:06.631024 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 09:39:06.631024 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 09:39:06.631024 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 09:39:06.631024 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 09:39:06.631024 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 09:39:06.631024 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 09:39:06.644515 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 09:39:06.644515 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 09:39:06.644515 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 16 09:39:06.644515 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 16 09:39:06.644515 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 16 09:39:06.644515 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 16 09:39:06.684823 systemd-networkd[795]: eth0: Gained IPv6LL May 16 09:39:07.047776 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 16 09:39:07.649433 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 16 09:39:07.649433 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 16 09:39:07.653502 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 09:39:07.653502 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 09:39:07.653502 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 16 09:39:07.653502 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 16 09:39:07.653502 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 09:39:07.653502 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 09:39:07.653502 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 16 09:39:07.653502 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 16 09:39:07.673516 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 09:39:07.676387 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 09:39:07.679225 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 16 09:39:07.679225 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 16 09:39:07.679225 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 16 09:39:07.679225 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 09:39:07.679225 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 09:39:07.679225 ignition[981]: INFO : files: files passed May 16 09:39:07.679225 ignition[981]: INFO : Ignition finished successfully May 16 09:39:07.679673 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 09:39:07.683016 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 09:39:07.685880 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 09:39:07.703862 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 09:39:07.703950 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 09:39:07.707193 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory May 16 09:39:07.710372 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 09:39:07.710372 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 09:39:07.713638 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 09:39:07.714449 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 09:39:07.716525 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 09:39:07.719625 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 09:39:07.765788 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 09:39:07.766542 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 09:39:07.768080 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 09:39:07.771439 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 09:39:07.773407 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 09:39:07.774195 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 09:39:07.798973 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 09:39:07.801113 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 09:39:07.820170 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 09:39:07.821465 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 09:39:07.823477 systemd[1]: Stopped target timers.target - Timer Units. 
May 16 09:39:07.825210 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 09:39:07.825328 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 09:39:07.827747 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 09:39:07.829665 systemd[1]: Stopped target basic.target - Basic System. May 16 09:39:07.831266 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 09:39:07.832918 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 09:39:07.834774 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 09:39:07.836672 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 16 09:39:07.838551 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 09:39:07.840441 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 09:39:07.842363 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 09:39:07.844306 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 09:39:07.845982 systemd[1]: Stopped target swap.target - Swaps. May 16 09:39:07.847422 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 09:39:07.847542 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 09:39:07.849789 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 09:39:07.851673 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 09:39:07.853567 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 09:39:07.854686 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 09:39:07.856621 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 09:39:07.856753 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
May 16 09:39:07.859337 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 09:39:07.859451 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 09:39:07.861700 systemd[1]: Stopped target paths.target - Path Units. May 16 09:39:07.863171 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 09:39:07.866638 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 09:39:07.867882 systemd[1]: Stopped target slices.target - Slice Units. May 16 09:39:07.869909 systemd[1]: Stopped target sockets.target - Socket Units. May 16 09:39:07.871417 systemd[1]: iscsid.socket: Deactivated successfully. May 16 09:39:07.871495 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 09:39:07.872999 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 09:39:07.873076 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 09:39:07.874578 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 09:39:07.874710 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 09:39:07.876449 systemd[1]: ignition-files.service: Deactivated successfully. May 16 09:39:07.876551 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 09:39:07.878812 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 09:39:07.880514 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 09:39:07.880667 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 09:39:07.893154 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 09:39:07.893989 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 09:39:07.894126 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 16 09:39:07.895909 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 09:39:07.896009 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 09:39:07.901789 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 09:39:07.901877 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 09:39:07.905853 ignition[1037]: INFO : Ignition 2.21.0 May 16 09:39:07.905853 ignition[1037]: INFO : Stage: umount May 16 09:39:07.907420 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 09:39:07.907420 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:39:07.907420 ignition[1037]: INFO : umount: umount passed May 16 09:39:07.907420 ignition[1037]: INFO : Ignition finished successfully May 16 09:39:07.908464 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 09:39:07.908942 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 09:39:07.909021 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 09:39:07.911019 systemd[1]: Stopped target network.target - Network. May 16 09:39:07.915083 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 09:39:07.915157 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 09:39:07.916794 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 09:39:07.916842 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 09:39:07.918414 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 09:39:07.918463 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 09:39:07.920120 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 09:39:07.920170 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 09:39:07.921945 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
May 16 09:39:07.923496 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 09:39:07.925461 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 09:39:07.925596 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 09:39:07.927360 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 09:39:07.927448 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 09:39:07.932821 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 09:39:07.932906 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 09:39:07.937719 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 09:39:07.937932 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 09:39:07.938033 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 09:39:07.942354 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 09:39:07.943178 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 16 09:39:07.944436 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 09:39:07.944472 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 09:39:07.948383 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 09:39:07.949324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 09:39:07.949380 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 09:39:07.951518 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 09:39:07.951563 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 09:39:07.954495 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 09:39:07.954536 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 16 09:39:07.956388 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 09:39:07.956428 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 09:39:07.959256 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 09:39:07.962907 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 09:39:07.962970 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 16 09:39:07.972942 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 09:39:07.973031 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 09:39:07.977115 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 09:39:07.977239 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 09:39:07.979287 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 09:39:07.979322 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 09:39:07.981141 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 09:39:07.981169 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 09:39:07.982914 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 09:39:07.982956 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 09:39:07.985523 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 09:39:07.985567 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 09:39:07.988191 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 09:39:07.988239 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 09:39:07.991623 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 09:39:07.992719 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 16 09:39:07.992783 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 16 09:39:07.995765 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 09:39:07.995807 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 09:39:07.998844 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 09:39:07.998883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 09:39:08.002999 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 16 09:39:08.003048 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 16 09:39:08.003079 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 09:39:08.011241 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 09:39:08.011360 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 09:39:08.013520 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 09:39:08.015919 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 09:39:08.044268 systemd[1]: Switching root.
May 16 09:39:08.066745 systemd-journald[244]: Journal stopped
May 16 09:39:08.815637 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
May 16 09:39:08.815686 kernel: SELinux: policy capability network_peer_controls=1
May 16 09:39:08.815700 kernel: SELinux: policy capability open_perms=1
May 16 09:39:08.815713 kernel: SELinux: policy capability extended_socket_class=1
May 16 09:39:08.815732 kernel: SELinux: policy capability always_check_network=0
May 16 09:39:08.815748 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 09:39:08.815759 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 09:39:08.815768 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 09:39:08.815777 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 09:39:08.815786 kernel: SELinux: policy capability userspace_initial_context=0
May 16 09:39:08.815795 kernel: audit: type=1403 audit(1747388348.233:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 09:39:08.815812 systemd[1]: Successfully loaded SELinux policy in 43.262ms.
May 16 09:39:08.815833 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.182ms.
May 16 09:39:08.815845 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 09:39:08.815856 systemd[1]: Detected virtualization kvm.
May 16 09:39:08.815866 systemd[1]: Detected architecture arm64.
May 16 09:39:08.815876 systemd[1]: Detected first boot.
May 16 09:39:08.815886 systemd[1]: Initializing machine ID from VM UUID.
May 16 09:39:08.815896 zram_generator::config[1081]: No configuration found.
May 16 09:39:08.815906 kernel: NET: Registered PF_VSOCK protocol family
May 16 09:39:08.815915 systemd[1]: Populated /etc with preset unit settings.
May 16 09:39:08.815927 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 16 09:39:08.815938 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 09:39:08.815948 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 09:39:08.815958 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 09:39:08.815968 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 09:39:08.815977 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 09:39:08.815987 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 09:39:08.815997 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 09:39:08.816007 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 09:39:08.816019 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 09:39:08.816029 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 09:39:08.816039 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 09:39:08.816049 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 09:39:08.816060 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 09:39:08.816070 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 09:39:08.816084 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 09:39:08.816094 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 09:39:08.816106 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 09:39:08.816116 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 16 09:39:08.816127 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 09:39:08.816137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 09:39:08.816147 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 09:39:08.816157 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 09:39:08.816166 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 09:39:08.816176 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 09:39:08.816188 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 09:39:08.816198 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 09:39:08.816208 systemd[1]: Reached target slices.target - Slice Units.
May 16 09:39:08.816218 systemd[1]: Reached target swap.target - Swaps.
May 16 09:39:08.816228 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 09:39:08.816238 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 09:39:08.816248 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 16 09:39:08.816258 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 09:39:08.816268 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 09:39:08.816278 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 09:39:08.816290 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 09:39:08.816300 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 09:39:08.816310 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 09:39:08.816320 systemd[1]: Mounting media.mount - External Media Directory...
May 16 09:39:08.816330 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 09:39:08.816340 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 09:39:08.816350 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 09:39:08.816361 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 09:39:08.816372 systemd[1]: Reached target machines.target - Containers.
May 16 09:39:08.816383 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 09:39:08.816393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 09:39:08.816403 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 09:39:08.816412 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 09:39:08.816423 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 09:39:08.816432 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 09:39:08.816442 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 09:39:08.816452 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 09:39:08.816463 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 09:39:08.816473 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 09:39:08.816483 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 09:39:08.816493 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 09:39:08.816503 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 09:39:08.816512 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 09:39:08.816522 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 09:39:08.816532 kernel: fuse: init (API version 7.41)
May 16 09:39:08.816543 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 09:39:08.816552 kernel: loop: module loaded
May 16 09:39:08.816563 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 09:39:08.816574 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 09:39:08.816599 kernel: ACPI: bus type drm_connector registered
May 16 09:39:08.816611 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 09:39:08.816621 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 16 09:39:08.816632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 09:39:08.816644 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 09:39:08.816654 systemd[1]: Stopped verity-setup.service.
May 16 09:39:08.816665 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 09:39:08.816675 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 09:39:08.816684 systemd[1]: Mounted media.mount - External Media Directory.
May 16 09:39:08.816694 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 09:39:08.816705 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 09:39:08.816715 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 09:39:08.816734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 09:39:08.816745 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 09:39:08.816777 systemd-journald[1142]: Collecting audit messages is disabled.
May 16 09:39:08.816802 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 09:39:08.816812 systemd-journald[1142]: Journal started
May 16 09:39:08.816832 systemd-journald[1142]: Runtime Journal (/run/log/journal/ba9fa96852254642842b0b76727ffaa5) is 6M, max 48.5M, 42.4M free.
May 16 09:39:08.606997 systemd[1]: Queued start job for default target multi-user.target.
May 16 09:39:08.625496 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 09:39:08.625904 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 09:39:08.819117 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 09:39:08.819860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 09:39:08.820027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 09:39:08.821389 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 09:39:08.821535 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 09:39:08.823508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 09:39:08.823702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 09:39:08.825123 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 09:39:08.825287 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 09:39:08.826658 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 09:39:08.826839 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 09:39:08.828852 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 09:39:08.831111 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 09:39:08.832694 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 09:39:08.834361 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 09:39:08.845926 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 09:39:08.848570 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 09:39:08.852413 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 09:39:08.853543 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 09:39:08.853577 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 09:39:08.855412 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 09:39:08.864338 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 09:39:08.865490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 09:39:08.866770 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 09:39:08.868562 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 09:39:08.869773 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 09:39:08.870697 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 09:39:08.871995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 09:39:08.874354 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 09:39:08.877694 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 09:39:08.878264 systemd-journald[1142]: Time spent on flushing to /var/log/journal/ba9fa96852254642842b0b76727ffaa5 is 17.170ms for 883 entries.
May 16 09:39:08.878264 systemd-journald[1142]: System Journal (/var/log/journal/ba9fa96852254642842b0b76727ffaa5) is 8M, max 195.6M, 187.6M free.
May 16 09:39:08.916960 systemd-journald[1142]: Received client request to flush runtime journal.
May 16 09:39:08.917015 kernel: loop0: detected capacity change from 0 to 138376
May 16 09:39:08.917030 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 09:39:08.881624 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 09:39:08.882981 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 09:39:08.884217 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 09:39:08.900874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 09:39:08.902564 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 09:39:08.905913 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 09:39:08.907747 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 09:39:08.910839 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 09:39:08.914728 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 09:39:08.923405 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 09:39:08.935075 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 09:39:08.939651 kernel: loop1: detected capacity change from 0 to 189592
May 16 09:39:08.955083 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 09:39:08.959200 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 09:39:08.959616 kernel: loop2: detected capacity change from 0 to 107312
May 16 09:39:08.981610 kernel: loop3: detected capacity change from 0 to 138376
May 16 09:39:08.983878 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
May 16 09:39:08.984152 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
May 16 09:39:08.990734 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 09:39:08.992602 kernel: loop4: detected capacity change from 0 to 189592
May 16 09:39:08.998601 kernel: loop5: detected capacity change from 0 to 107312
May 16 09:39:09.002184 (sd-merge)[1219]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 09:39:09.002527 (sd-merge)[1219]: Merged extensions into '/usr'.
May 16 09:39:09.006333 systemd[1]: Reload requested from client PID 1190 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 09:39:09.006351 systemd[1]: Reloading...
May 16 09:39:09.061605 zram_generator::config[1249]: No configuration found.
May 16 09:39:09.142971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 09:39:09.159603 ldconfig[1185]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 09:39:09.207258 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 09:39:09.207387 systemd[1]: Reloading finished in 200 ms.
May 16 09:39:09.225022 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 09:39:09.226523 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 09:39:09.237903 systemd[1]: Starting ensure-sysext.service...
May 16 09:39:09.239639 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 09:39:09.249665 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)...
May 16 09:39:09.249682 systemd[1]: Reloading...
May 16 09:39:09.256941 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 16 09:39:09.257242 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 16 09:39:09.257525 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 09:39:09.257837 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 09:39:09.258513 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 09:39:09.258844 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
May 16 09:39:09.258966 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
May 16 09:39:09.261556 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
May 16 09:39:09.261673 systemd-tmpfiles[1281]: Skipping /boot
May 16 09:39:09.270445 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
May 16 09:39:09.270552 systemd-tmpfiles[1281]: Skipping /boot
May 16 09:39:09.302609 zram_generator::config[1314]: No configuration found.
May 16 09:39:09.359093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 09:39:09.420204 systemd[1]: Reloading finished in 170 ms.
May 16 09:39:09.442413 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 09:39:09.449053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 09:39:09.460728 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 09:39:09.463067 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 09:39:09.465222 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 09:39:09.469738 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 09:39:09.472160 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 09:39:09.475781 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 09:39:09.480151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 09:39:09.489814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 09:39:09.493304 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 09:39:09.495522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 09:39:09.497824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 09:39:09.497947 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 09:39:09.501623 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 09:39:09.508056 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 09:39:09.508223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 09:39:09.508298 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 09:39:09.511444 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 09:39:09.516963 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 09:39:09.518049 systemd-udevd[1349]: Using default interface naming scheme 'v255'.
May 16 09:39:09.520643 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 09:39:09.522791 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 09:39:09.524539 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 09:39:09.524845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 09:39:09.526490 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 09:39:09.527811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 09:39:09.529516 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 09:39:09.531684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 09:39:09.533262 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 09:39:09.536748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 09:39:09.538456 augenrules[1376]: No rules
May 16 09:39:09.539410 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 09:39:09.539711 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 09:39:09.552127 systemd[1]: Finished ensure-sysext.service.
May 16 09:39:09.559809 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 09:39:09.560822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 09:39:09.563616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 09:39:09.573518 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 09:39:09.575752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 09:39:09.580797 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 09:39:09.583840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 09:39:09.583889 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 09:39:09.586437 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 09:39:09.589382 augenrules[1413]: /sbin/augenrules: No change
May 16 09:39:09.589862 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 09:39:09.592778 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 09:39:09.593138 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 09:39:09.594518 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 09:39:09.594755 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 09:39:09.603537 augenrules[1445]: No rules
May 16 09:39:09.604869 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 09:39:09.616966 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 09:39:09.618382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 09:39:09.618558 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 09:39:09.623205 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 16 09:39:09.626003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 09:39:09.629710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 09:39:09.648373 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 09:39:09.648544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 09:39:09.668521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 09:39:09.678745 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 09:39:09.679908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 09:39:09.679988 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 09:39:09.712050 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 09:39:09.731160 systemd-resolved[1347]: Positive Trust Anchors:
May 16 09:39:09.737569 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 09:39:09.738654 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 09:39:09.739194 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 09:39:09.740929 systemd[1]: Reached target time-set.target - System Time Set.
May 16 09:39:09.752158 systemd-resolved[1347]: Defaulting to hostname 'linux'.
May 16 09:39:09.759139 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 09:39:09.760459 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 09:39:09.762147 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 09:39:09.763482 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 09:39:09.764238 systemd-networkd[1429]: lo: Link UP
May 16 09:39:09.764252 systemd-networkd[1429]: lo: Gained carrier
May 16 09:39:09.765123 systemd-networkd[1429]: Enumeration completed
May 16 09:39:09.765134 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 09:39:09.765638 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 09:39:09.765647 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 09:39:09.766101 systemd-networkd[1429]: eth0: Link UP May 16 09:39:09.766217 systemd-networkd[1429]: eth0: Gained carrier May 16 09:39:09.766233 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 09:39:09.766645 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 09:39:09.768044 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 09:39:09.769158 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 09:39:09.770568 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 09:39:09.770618 systemd[1]: Reached target paths.target - Path Units. May 16 09:39:09.771293 systemd[1]: Reached target timers.target - Timer Units. May 16 09:39:09.773427 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 09:39:09.775871 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 09:39:09.783850 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 16 09:39:09.785830 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 16 09:39:09.787078 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 16 09:39:09.788701 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 09:39:09.789214 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection. May 16 09:39:09.795188 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 09:39:09.795247 systemd-timesyncd[1433]: Initial clock synchronization to Fri 2025-05-16 09:39:09.816318 UTC. 
May 16 09:39:09.805544 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 09:39:09.812440 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 16 09:39:09.818025 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 09:39:09.819693 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 09:39:09.827317 systemd[1]: Reached target network.target - Network. May 16 09:39:09.828371 systemd[1]: Reached target sockets.target - Socket Units. May 16 09:39:09.829555 systemd[1]: Reached target basic.target - Basic System. May 16 09:39:09.830464 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 09:39:09.830610 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 09:39:09.833859 systemd[1]: Starting containerd.service - containerd container runtime... May 16 09:39:09.836129 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 09:39:09.838216 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 09:39:09.844439 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 09:39:09.846593 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 09:39:09.847641 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 09:39:09.848697 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 09:39:09.852690 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 09:39:09.855892 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 16 09:39:09.859037 jq[1486]: false May 16 09:39:09.869815 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 09:39:09.881850 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 09:39:09.884098 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 16 09:39:09.886196 extend-filesystems[1487]: Found loop3 May 16 09:39:09.887184 extend-filesystems[1487]: Found loop4 May 16 09:39:09.887184 extend-filesystems[1487]: Found loop5 May 16 09:39:09.887184 extend-filesystems[1487]: Found vda May 16 09:39:09.887184 extend-filesystems[1487]: Found vda1 May 16 09:39:09.887184 extend-filesystems[1487]: Found vda2 May 16 09:39:09.887184 extend-filesystems[1487]: Found vda3 May 16 09:39:09.887184 extend-filesystems[1487]: Found usr May 16 09:39:09.897566 extend-filesystems[1487]: Found vda4 May 16 09:39:09.897566 extend-filesystems[1487]: Found vda6 May 16 09:39:09.897566 extend-filesystems[1487]: Found vda7 May 16 09:39:09.897566 extend-filesystems[1487]: Found vda9 May 16 09:39:09.897566 extend-filesystems[1487]: Checking size of /dev/vda9 May 16 09:39:09.887775 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 09:39:09.890781 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 09:39:09.893848 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 09:39:09.895031 systemd[1]: Starting update-engine.service - Update Engine... May 16 09:39:09.897863 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 09:39:09.913623 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
May 16 09:39:09.915405 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 09:39:09.917779 jq[1506]: true May 16 09:39:09.917974 extend-filesystems[1487]: Resized partition /dev/vda9 May 16 09:39:09.916371 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 09:39:09.917172 systemd[1]: motdgen.service: Deactivated successfully. May 16 09:39:09.917351 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 09:39:09.919454 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 09:39:09.919653 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 09:39:09.930694 extend-filesystems[1511]: resize2fs 1.47.2 (1-Jan-2025) May 16 09:39:09.938318 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 09:39:09.942869 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 09:39:09.947729 jq[1513]: true May 16 09:39:09.950438 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 09:39:09.978104 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 09:39:09.985722 update_engine[1505]: I20250516 09:39:09.985026 1505 main.cc:92] Flatcar Update Engine starting May 16 09:39:09.986635 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 16 09:39:09.996787 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (Power Button) May 16 09:39:10.000784 systemd-logind[1496]: New seat seat0. 
May 16 09:39:10.001153 extend-filesystems[1511]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 09:39:10.001153 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 09:39:10.001153 extend-filesystems[1511]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 09:39:10.014947 extend-filesystems[1487]: Resized filesystem in /dev/vda9 May 16 09:39:10.003148 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 09:39:10.003395 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 09:39:10.023258 dbus-daemon[1484]: [system] SELinux support is enabled May 16 09:39:10.028507 update_engine[1505]: I20250516 09:39:10.028458 1505 update_check_scheduler.cc:74] Next update check in 8m24s May 16 09:39:10.063410 systemd[1]: Started systemd-logind.service - User Login Management. May 16 09:39:10.064934 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 09:39:10.066208 bash[1547]: Updated "/home/core/.ssh/authorized_keys" May 16 09:39:10.069608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 09:39:10.071016 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 09:39:10.081453 tar[1512]: linux-arm64/helm May 16 09:39:10.082275 systemd[1]: Started update-engine.service - Update Engine. May 16 09:39:10.083895 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 16 09:39:10.083989 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.systemd1' May 16 09:39:10.083989 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 09:39:10.084010 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 16 09:39:10.085384 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 09:39:10.085410 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 09:39:10.088740 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 09:39:10.161344 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 09:39:10.209105 containerd[1514]: time="2025-05-16T09:39:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 09:39:10.212597 containerd[1514]: time="2025-05-16T09:39:10.211049203Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 16 09:39:10.222014 containerd[1514]: time="2025-05-16T09:39:10.221968353Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.487µs" May 16 09:39:10.222014 containerd[1514]: time="2025-05-16T09:39:10.222003656Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 09:39:10.222014 containerd[1514]: time="2025-05-16T09:39:10.222020948Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 09:39:10.222203 containerd[1514]: time="2025-05-16T09:39:10.222174569Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 09:39:10.222203 containerd[1514]: time="2025-05-16T09:39:10.222196384Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 09:39:10.222242 containerd[1514]: time="2025-05-16T09:39:10.222219919Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 09:39:10.222285 containerd[1514]: time="2025-05-16T09:39:10.222267911Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 09:39:10.222306 containerd[1514]: time="2025-05-16T09:39:10.222283442Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 09:39:10.222524 containerd[1514]: time="2025-05-16T09:39:10.222495142Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 09:39:10.222524 containerd[1514]: time="2025-05-16T09:39:10.222517356Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 09:39:10.222560 containerd[1514]: time="2025-05-16T09:39:10.222528244Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 09:39:10.222560 containerd[1514]: time="2025-05-16T09:39:10.222536609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 09:39:10.222664 containerd[1514]: time="2025-05-16T09:39:10.222648723Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 09:39:10.222866 containerd[1514]: time="2025-05-16T09:39:10.222837368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 09:39:10.222898 containerd[1514]: time="2025-05-16T09:39:10.222883719Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 09:39:10.222920 containerd[1514]: time="2025-05-16T09:39:10.222897728Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 09:39:10.222944 containerd[1514]: time="2025-05-16T09:39:10.222933432Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 09:39:10.223161 containerd[1514]: time="2025-05-16T09:39:10.223145252Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 09:39:10.223223 containerd[1514]: time="2025-05-16T09:39:10.223208214Z" level=info msg="metadata content store policy set" policy=shared May 16 09:39:10.228049 containerd[1514]: time="2025-05-16T09:39:10.228017038Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 09:39:10.228126 containerd[1514]: time="2025-05-16T09:39:10.228070313Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 09:39:10.228126 containerd[1514]: time="2025-05-16T09:39:10.228086084Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 09:39:10.228126 containerd[1514]: time="2025-05-16T09:39:10.228099132Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 09:39:10.228126 containerd[1514]: time="2025-05-16T09:39:10.228111941Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 09:39:10.228126 containerd[1514]: time="2025-05-16T09:39:10.228125630Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 09:39:10.228296 containerd[1514]: time="2025-05-16T09:39:10.228139119Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 09:39:10.228296 containerd[1514]: time="2025-05-16T09:39:10.228152248Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 09:39:10.228296 containerd[1514]: time="2025-05-16T09:39:10.228163575Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 09:39:10.228296 containerd[1514]: time="2025-05-16T09:39:10.228174022Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 16 09:39:10.228296 containerd[1514]: time="2025-05-16T09:39:10.228183588Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 16 09:39:10.228296 containerd[1514]: time="2025-05-16T09:39:10.228196677Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 16 09:39:10.228548 containerd[1514]: time="2025-05-16T09:39:10.228321360Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 16 09:39:10.228548 containerd[1514]: time="2025-05-16T09:39:10.228352060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 16 09:39:10.228626 containerd[1514]: time="2025-05-16T09:39:10.228605388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 16 09:39:10.228645 containerd[1514]: time="2025-05-16T09:39:10.228631645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 16 09:39:10.228662 containerd[1514]: time="2025-05-16T09:39:10.228650538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 16 09:39:10.228680 containerd[1514]: time="2025-05-16T09:39:10.228666668Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 16 09:39:10.228706 containerd[1514]: time="2025-05-16T09:39:10.228684680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 16 09:39:10.228706 containerd[1514]: time="2025-05-16T09:39:10.228697409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 16 09:39:10.228743 containerd[1514]: time="2025-05-16T09:39:10.228714660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 16 09:39:10.228743 containerd[1514]: time="2025-05-16T09:39:10.228730631Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 16 09:39:10.228776 containerd[1514]: time="2025-05-16T09:39:10.228744880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 16 09:39:10.229186 containerd[1514]: time="2025-05-16T09:39:10.229157193Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 16 09:39:10.229680 containerd[1514]: time="2025-05-16T09:39:10.229257900Z" level=info msg="Start snapshots syncer" May 16 09:39:10.229680 containerd[1514]: time="2025-05-16T09:39:10.229288520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 16 09:39:10.230387 containerd[1514]: time="2025-05-16T09:39:10.230337535Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 16 09:39:10.230562 containerd[1514]: time="2025-05-16T09:39:10.230542430Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 16 09:39:10.230816 containerd[1514]: time="2025-05-16T09:39:10.230789114Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 16 09:39:10.231082 containerd[1514]: time="2025-05-16T09:39:10.231058172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 16 09:39:10.231179 containerd[1514]: time="2025-05-16T09:39:10.231162481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 16 09:39:10.231240 containerd[1514]: time="2025-05-16T09:39:10.231228925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 16 09:39:10.231358 containerd[1514]: time="2025-05-16T09:39:10.231342800Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 16 09:39:10.231491 containerd[1514]: time="2025-05-16T09:39:10.231476009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 16 09:39:10.231577 containerd[1514]: time="2025-05-16T09:39:10.231563787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 16 09:39:10.231744 containerd[1514]: time="2025-05-16T09:39:10.231728816Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 16 09:39:10.231829 containerd[1514]: time="2025-05-16T09:39:10.231816474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 16 09:39:10.231952 containerd[1514]: time="2025-05-16T09:39:10.231884759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 16 09:39:10.232023 containerd[1514]: time="2025-05-16T09:39:10.232010643Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 09:39:10.232113 containerd[1514]: time="2025-05-16T09:39:10.232098901Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 09:39:10.232308 containerd[1514]: time="2025-05-16T09:39:10.232241996Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 09:39:10.232308 containerd[1514]: time="2025-05-16T09:39:10.232258607Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 09:39:10.232308 containerd[1514]: time="2025-05-16T09:39:10.232269454Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 09:39:10.232308 containerd[1514]: time="2025-05-16T09:39:10.232277779Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 09:39:10.232308 containerd[1514]: time="2025-05-16T09:39:10.232289107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 09:39:10.232434 containerd[1514]: time="2025-05-16T09:39:10.232420314Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 09:39:10.232561 containerd[1514]: time="2025-05-16T09:39:10.232551401Z" level=info msg="runtime interface created" May 16 09:39:10.232683 containerd[1514]: time="2025-05-16T09:39:10.232595510Z" level=info msg="created NRI interface" May 16 09:39:10.232683 containerd[1514]: time="2025-05-16T09:39:10.232608719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 09:39:10.232683 containerd[1514]: time="2025-05-16T09:39:10.232622087Z" level=info msg="Connect containerd service" May 16 09:39:10.232781 containerd[1514]: time="2025-05-16T09:39:10.232660353Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 09:39:10.233837 
containerd[1514]: time="2025-05-16T09:39:10.233740548Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 09:39:10.343330 containerd[1514]: time="2025-05-16T09:39:10.343292529Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 09:39:10.343592 containerd[1514]: time="2025-05-16T09:39:10.343522282Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 09:39:10.343703 containerd[1514]: time="2025-05-16T09:39:10.343443429Z" level=info msg="Start subscribing containerd event" May 16 09:39:10.343825 containerd[1514]: time="2025-05-16T09:39:10.343768044Z" level=info msg="Start recovering state" May 16 09:39:10.343915 containerd[1514]: time="2025-05-16T09:39:10.343901773Z" level=info msg="Start event monitor" May 16 09:39:10.343986 containerd[1514]: time="2025-05-16T09:39:10.343974661Z" level=info msg="Start cni network conf syncer for default" May 16 09:39:10.344171 containerd[1514]: time="2025-05-16T09:39:10.344020852Z" level=info msg="Start streaming server" May 16 09:39:10.344171 containerd[1514]: time="2025-05-16T09:39:10.344036822Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 09:39:10.344171 containerd[1514]: time="2025-05-16T09:39:10.344045908Z" level=info msg="runtime interface starting up..." May 16 09:39:10.344171 containerd[1514]: time="2025-05-16T09:39:10.344052593Z" level=info msg="starting plugins..." May 16 09:39:10.344171 containerd[1514]: time="2025-05-16T09:39:10.344069564Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 09:39:10.345664 containerd[1514]: time="2025-05-16T09:39:10.345646328Z" level=info msg="containerd successfully booted in 0.136950s" May 16 09:39:10.345757 systemd[1]: Started containerd.service - containerd container runtime. 
May 16 09:39:10.431502 tar[1512]: linux-arm64/LICENSE May 16 09:39:10.431651 tar[1512]: linux-arm64/README.md May 16 09:39:10.456621 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 09:39:11.052080 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 09:39:11.070265 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 09:39:11.074799 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 09:39:11.095752 systemd[1]: issuegen.service: Deactivated successfully. May 16 09:39:11.095983 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 09:39:11.098633 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 09:39:11.127900 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 09:39:11.131739 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 09:39:11.134597 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 16 09:39:11.135865 systemd[1]: Reached target getty.target - Login Prompts. May 16 09:39:11.740784 systemd-networkd[1429]: eth0: Gained IPv6LL May 16 09:39:11.743303 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 09:39:11.745280 systemd[1]: Reached target network-online.target - Network is Online. May 16 09:39:11.747762 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 09:39:11.750075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 09:39:11.763342 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 09:39:11.778538 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 09:39:11.779669 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 16 09:39:11.781293 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 16 09:39:11.787545 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 09:39:12.310096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 09:39:12.312000 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 09:39:12.315037 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 09:39:12.316821 systemd[1]: Startup finished in 2.069s (kernel) + 5.605s (initrd) + 4.132s (userspace) = 11.807s. May 16 09:39:12.797122 kubelet[1621]: E0516 09:39:12.796982 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 09:39:12.799659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 09:39:12.799797 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 09:39:12.800119 systemd[1]: kubelet.service: Consumed 813ms CPU time, 232.1M memory peak. May 16 09:39:15.984040 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 09:39:15.985169 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:38142.service - OpenSSH per-connection server daemon (10.0.0.1:38142). May 16 09:39:16.072303 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 38142 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:39:16.074160 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:39:16.081855 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 09:39:16.082774 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 16 09:39:16.089449 systemd-logind[1496]: New session 1 of user core. May 16 09:39:16.106860 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 09:39:16.109856 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 09:39:16.125625 (systemd)[1638]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 09:39:16.127719 systemd-logind[1496]: New session c1 of user core. May 16 09:39:16.243039 systemd[1638]: Queued start job for default target default.target. May 16 09:39:16.265466 systemd[1638]: Created slice app.slice - User Application Slice. May 16 09:39:16.265507 systemd[1638]: Reached target paths.target - Paths. May 16 09:39:16.265547 systemd[1638]: Reached target timers.target - Timers. May 16 09:39:16.266745 systemd[1638]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 09:39:16.275417 systemd[1638]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 09:39:16.275478 systemd[1638]: Reached target sockets.target - Sockets. May 16 09:39:16.275526 systemd[1638]: Reached target basic.target - Basic System. May 16 09:39:16.275555 systemd[1638]: Reached target default.target - Main User Target. May 16 09:39:16.275601 systemd[1638]: Startup finished in 142ms. May 16 09:39:16.275815 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 09:39:16.277229 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 09:39:16.342834 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:38154.service - OpenSSH per-connection server daemon (10.0.0.1:38154). May 16 09:39:16.395597 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 38154 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:39:16.396698 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:39:16.401287 systemd-logind[1496]: New session 2 of user core. 
May 16 09:39:16.410753 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 09:39:16.462500 sshd[1651]: Connection closed by 10.0.0.1 port 38154
May 16 09:39:16.462780 sshd-session[1649]: pam_unix(sshd:session): session closed for user core
May 16 09:39:16.475702 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:38154.service: Deactivated successfully.
May 16 09:39:16.478821 systemd[1]: session-2.scope: Deactivated successfully.
May 16 09:39:16.479409 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit.
May 16 09:39:16.481422 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:38156.service - OpenSSH per-connection server daemon (10.0.0.1:38156).
May 16 09:39:16.481891 systemd-logind[1496]: Removed session 2.
May 16 09:39:16.532694 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 38156 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:39:16.533890 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:39:16.537486 systemd-logind[1496]: New session 3 of user core.
May 16 09:39:16.546730 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 09:39:16.594634 sshd[1659]: Connection closed by 10.0.0.1 port 38156
May 16 09:39:16.594655 sshd-session[1657]: pam_unix(sshd:session): session closed for user core
May 16 09:39:16.603371 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:38156.service: Deactivated successfully.
May 16 09:39:16.604601 systemd[1]: session-3.scope: Deactivated successfully.
May 16 09:39:16.606168 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit.
May 16 09:39:16.607986 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:38158.service - OpenSSH per-connection server daemon (10.0.0.1:38158).
May 16 09:39:16.609016 systemd-logind[1496]: Removed session 3.
May 16 09:39:16.663389 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 38158 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:39:16.664681 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:39:16.668645 systemd-logind[1496]: New session 4 of user core.
May 16 09:39:16.685727 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 09:39:16.736438 sshd[1667]: Connection closed by 10.0.0.1 port 38158
May 16 09:39:16.736761 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
May 16 09:39:16.745713 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:38158.service: Deactivated successfully.
May 16 09:39:16.748049 systemd[1]: session-4.scope: Deactivated successfully.
May 16 09:39:16.748777 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit.
May 16 09:39:16.751938 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:38168.service - OpenSSH per-connection server daemon (10.0.0.1:38168).
May 16 09:39:16.752329 systemd-logind[1496]: Removed session 4.
May 16 09:39:16.798551 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 38168 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:39:16.799844 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:39:16.804035 systemd-logind[1496]: New session 5 of user core.
May 16 09:39:16.811704 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 09:39:16.874243 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 09:39:16.874511 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 09:39:16.898193 sudo[1676]: pam_unix(sudo:session): session closed for user root
May 16 09:39:16.899716 sshd[1675]: Connection closed by 10.0.0.1 port 38168
May 16 09:39:16.900101 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
May 16 09:39:16.911563 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:38168.service: Deactivated successfully.
May 16 09:39:16.913155 systemd[1]: session-5.scope: Deactivated successfully.
May 16 09:39:16.915121 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit.
May 16 09:39:16.917471 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:38172.service - OpenSSH per-connection server daemon (10.0.0.1:38172).
May 16 09:39:16.918357 systemd-logind[1496]: Removed session 5.
May 16 09:39:16.973254 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 38172 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:39:16.974484 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:39:16.978885 systemd-logind[1496]: New session 6 of user core.
May 16 09:39:16.991745 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 09:39:17.042800 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 09:39:17.043355 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 09:39:17.104822 sudo[1686]: pam_unix(sudo:session): session closed for user root
May 16 09:39:17.109807 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 09:39:17.110076 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 09:39:17.118251 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 09:39:17.150325 augenrules[1708]: No rules
May 16 09:39:17.151465 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 09:39:17.153635 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 09:39:17.154627 sudo[1685]: pam_unix(sudo:session): session closed for user root
May 16 09:39:17.155943 sshd[1684]: Connection closed by 10.0.0.1 port 38172
May 16 09:39:17.156500 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
May 16 09:39:17.171194 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:38172.service: Deactivated successfully.
May 16 09:39:17.173997 systemd[1]: session-6.scope: Deactivated successfully.
May 16 09:39:17.174750 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit.
May 16 09:39:17.178058 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:38178.service - OpenSSH per-connection server daemon (10.0.0.1:38178).
May 16 09:39:17.178618 systemd-logind[1496]: Removed session 6.
May 16 09:39:17.225330 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 38178 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:39:17.226528 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:39:17.230872 systemd-logind[1496]: New session 7 of user core.
May 16 09:39:17.247756 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 09:39:17.298143 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 09:39:17.298402 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 09:39:17.699630 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 09:39:17.712866 (dockerd)[1740]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 09:39:17.983448 dockerd[1740]: time="2025-05-16T09:39:17.983199111Z" level=info msg="Starting up"
May 16 09:39:17.984019 dockerd[1740]: time="2025-05-16T09:39:17.983987409Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 16 09:39:18.025330 dockerd[1740]: time="2025-05-16T09:39:18.025180422Z" level=info msg="Loading containers: start."
May 16 09:39:18.036533 kernel: Initializing XFRM netlink socket
May 16 09:39:18.225626 systemd-networkd[1429]: docker0: Link UP
May 16 09:39:18.228712 dockerd[1740]: time="2025-05-16T09:39:18.228676361Z" level=info msg="Loading containers: done."
May 16 09:39:18.240741 dockerd[1740]: time="2025-05-16T09:39:18.240651353Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 09:39:18.240741 dockerd[1740]: time="2025-05-16T09:39:18.240730314Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 16 09:39:18.240880 dockerd[1740]: time="2025-05-16T09:39:18.240826643Z" level=info msg="Initializing buildkit"
May 16 09:39:18.264573 dockerd[1740]: time="2025-05-16T09:39:18.264519735Z" level=info msg="Completed buildkit initialization"
May 16 09:39:18.268949 dockerd[1740]: time="2025-05-16T09:39:18.268913192Z" level=info msg="Daemon has completed initialization"
May 16 09:39:18.269163 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 09:39:18.269492 dockerd[1740]: time="2025-05-16T09:39:18.268972622Z" level=info msg="API listen on /run/docker.sock"
May 16 09:39:19.031361 containerd[1514]: time="2025-05-16T09:39:19.031324509Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 16 09:39:19.754361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079470962.mount: Deactivated successfully.
May 16 09:39:21.023731 containerd[1514]: time="2025-05-16T09:39:21.023677970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:21.024126 containerd[1514]: time="2025-05-16T09:39:21.024082959Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651976"
May 16 09:39:21.025016 containerd[1514]: time="2025-05-16T09:39:21.024948083Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:21.027295 containerd[1514]: time="2025-05-16T09:39:21.027267646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:21.028711 containerd[1514]: time="2025-05-16T09:39:21.028662137Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 1.99730005s"
May 16 09:39:21.028711 containerd[1514]: time="2025-05-16T09:39:21.028702596Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\""
May 16 09:39:21.029283 containerd[1514]: time="2025-05-16T09:39:21.029253294Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 16 09:39:22.438166 containerd[1514]: time="2025-05-16T09:39:22.438122767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:22.439058 containerd[1514]: time="2025-05-16T09:39:22.438818481Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459530"
May 16 09:39:22.439690 containerd[1514]: time="2025-05-16T09:39:22.439653659Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:22.442206 containerd[1514]: time="2025-05-16T09:39:22.442179122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:22.443170 containerd[1514]: time="2025-05-16T09:39:22.443144519Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.413860611s"
May 16 09:39:22.443207 containerd[1514]: time="2025-05-16T09:39:22.443176533Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\""
May 16 09:39:22.443711 containerd[1514]: time="2025-05-16T09:39:22.443687565Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 16 09:39:23.050380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 09:39:23.051904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 09:39:23.163712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 09:39:23.167331 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 09:39:23.202705 kubelet[2018]: E0516 09:39:23.202650 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 09:39:23.205808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 09:39:23.205952 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 09:39:23.206231 systemd[1]: kubelet.service: Consumed 128ms CPU time, 94.8M memory peak.
May 16 09:39:23.954095 containerd[1514]: time="2025-05-16T09:39:23.953991525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:23.955035 containerd[1514]: time="2025-05-16T09:39:23.954914369Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125281"
May 16 09:39:23.955672 containerd[1514]: time="2025-05-16T09:39:23.955645970Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:23.958481 containerd[1514]: time="2025-05-16T09:39:23.958428630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:23.959493 containerd[1514]: time="2025-05-16T09:39:23.959450998Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.515729978s"
May 16 09:39:23.959564 containerd[1514]: time="2025-05-16T09:39:23.959495778Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\""
May 16 09:39:23.959942 containerd[1514]: time="2025-05-16T09:39:23.959918763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 16 09:39:25.048515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734442784.mount: Deactivated successfully.
May 16 09:39:25.284075 containerd[1514]: time="2025-05-16T09:39:25.284023387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:25.285333 containerd[1514]: time="2025-05-16T09:39:25.285273381Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871377"
May 16 09:39:25.286085 containerd[1514]: time="2025-05-16T09:39:25.286040297Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:25.287985 containerd[1514]: time="2025-05-16T09:39:25.287948202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:25.289033 containerd[1514]: time="2025-05-16T09:39:25.288922763Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.328976108s"
May 16 09:39:25.289033 containerd[1514]: time="2025-05-16T09:39:25.288953896Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\""
May 16 09:39:25.289384 containerd[1514]: time="2025-05-16T09:39:25.289361423Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 16 09:39:25.869826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931443841.mount: Deactivated successfully.
May 16 09:39:26.795872 containerd[1514]: time="2025-05-16T09:39:26.795807736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:26.796265 containerd[1514]: time="2025-05-16T09:39:26.796155875Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 16 09:39:26.797217 containerd[1514]: time="2025-05-16T09:39:26.797187686Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:26.799501 containerd[1514]: time="2025-05-16T09:39:26.799453389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:26.800886 containerd[1514]: time="2025-05-16T09:39:26.800850505Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.51145819s"
May 16 09:39:26.800925 containerd[1514]: time="2025-05-16T09:39:26.800886400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 16 09:39:26.801388 containerd[1514]: time="2025-05-16T09:39:26.801365071Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 09:39:27.304216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342761381.mount: Deactivated successfully.
May 16 09:39:27.308784 containerd[1514]: time="2025-05-16T09:39:27.308742532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 09:39:27.309689 containerd[1514]: time="2025-05-16T09:39:27.309655284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 16 09:39:27.310441 containerd[1514]: time="2025-05-16T09:39:27.310408335Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 09:39:27.312125 containerd[1514]: time="2025-05-16T09:39:27.312088024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 09:39:27.312757 containerd[1514]: time="2025-05-16T09:39:27.312721428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 511.326866ms"
May 16 09:39:27.312757 containerd[1514]: time="2025-05-16T09:39:27.312751640Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 16 09:39:27.313232 containerd[1514]: time="2025-05-16T09:39:27.313183006Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 16 09:39:27.835541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount523927415.mount: Deactivated successfully.
May 16 09:39:30.012107 containerd[1514]: time="2025-05-16T09:39:30.011667876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:30.012511 containerd[1514]: time="2025-05-16T09:39:30.012482842Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
May 16 09:39:30.013111 containerd[1514]: time="2025-05-16T09:39:30.013067288Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:30.015609 containerd[1514]: time="2025-05-16T09:39:30.015573047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:39:30.017355 containerd[1514]: time="2025-05-16T09:39:30.017318100Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.704108723s"
May 16 09:39:30.017425 containerd[1514]: time="2025-05-16T09:39:30.017358114Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 16 09:39:33.457170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 16 09:39:33.459003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 09:39:33.571180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 09:39:33.586844 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 09:39:33.620101 kubelet[2169]: E0516 09:39:33.620055 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 09:39:33.622502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 09:39:33.622637 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 09:39:33.623047 systemd[1]: kubelet.service: Consumed 120ms CPU time, 95M memory peak.
May 16 09:39:36.209772 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 09:39:36.209908 systemd[1]: kubelet.service: Consumed 120ms CPU time, 95M memory peak.
May 16 09:39:36.211769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 09:39:36.232901 systemd[1]: Reload requested from client PID 2185 ('systemctl') (unit session-7.scope)...
May 16 09:39:36.232915 systemd[1]: Reloading...
May 16 09:39:36.313634 zram_generator::config[2233]: No configuration found.
May 16 09:39:36.483927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 09:39:36.567786 systemd[1]: Reloading finished in 334 ms.
May 16 09:39:36.606067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 09:39:36.609010 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 09:39:36.609515 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 09:39:36.609786 systemd[1]: kubelet.service: Deactivated successfully.
May 16 09:39:36.610664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 09:39:36.610699 systemd[1]: kubelet.service: Consumed 81ms CPU time, 82.5M memory peak.
May 16 09:39:36.612710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 09:39:36.718281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 09:39:36.722179 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 09:39:36.755428 kubelet[2275]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 09:39:36.755428 kubelet[2275]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 16 09:39:36.755428 kubelet[2275]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 09:39:36.755702 kubelet[2275]: I0516 09:39:36.755480 2275 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 09:39:37.867438 kubelet[2275]: I0516 09:39:37.867388 2275 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 16 09:39:37.867438 kubelet[2275]: I0516 09:39:37.867427 2275 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 09:39:37.867800 kubelet[2275]: I0516 09:39:37.867686 2275 server.go:929] "Client rotation is on, will bootstrap in background"
May 16 09:39:37.984782 kubelet[2275]: E0516 09:39:37.984730 2275 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
May 16 09:39:37.985652 kubelet[2275]: I0516 09:39:37.985344 2275 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 09:39:37.999378 kubelet[2275]: I0516 09:39:37.999346 2275 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 16 09:39:38.002743 kubelet[2275]: I0516 09:39:38.002709 2275 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 09:39:38.003672 kubelet[2275]: I0516 09:39:38.003644 2275 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 16 09:39:38.003828 kubelet[2275]: I0516 09:39:38.003791 2275 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 09:39:38.004004 kubelet[2275]: I0516 09:39:38.003823 2275 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 09:39:38.004248 kubelet[2275]: I0516 09:39:38.004228 2275 topology_manager.go:138] "Creating topology manager with none policy"
May 16 09:39:38.004248 kubelet[2275]: I0516 09:39:38.004242 2275 container_manager_linux.go:300] "Creating device plugin manager"
May 16 09:39:38.004490 kubelet[2275]: I0516 09:39:38.004470 2275 state_mem.go:36] "Initialized new in-memory state store"
May 16 09:39:38.006141 kubelet[2275]: I0516 09:39:38.006117 2275 kubelet.go:408] "Attempting to sync node with API server"
May 16 09:39:38.006173 kubelet[2275]: I0516 09:39:38.006142 2275 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 09:39:38.006338 kubelet[2275]: I0516 09:39:38.006320 2275 kubelet.go:314] "Adding apiserver pod source"
May 16 09:39:38.006338 kubelet[2275]: I0516 09:39:38.006338 2275 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 09:39:38.008566 kubelet[2275]: I0516 09:39:38.008533 2275 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 16 09:39:38.010267 kubelet[2275]: W0516 09:39:38.010175 2275 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
May 16 09:39:38.010267 kubelet[2275]: E0516 09:39:38.010231 2275 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
May 16 09:39:38.010422 kubelet[2275]: W0516 09:39:38.010364 2275 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
May 16 09:39:38.010422 kubelet[2275]: E0516 09:39:38.010417 2275 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
May 16 09:39:38.011167 kubelet[2275]: I0516 09:39:38.011153 2275 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 09:39:38.012514 kubelet[2275]: W0516 09:39:38.012489 2275 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 16 09:39:38.013363 kubelet[2275]: I0516 09:39:38.013280 2275 server.go:1269] "Started kubelet"
May 16 09:39:38.015807 kubelet[2275]: I0516 09:39:38.015782 2275 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 09:39:38.021824 kubelet[2275]: I0516 09:39:38.021775 2275 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 16 09:39:38.023221 kubelet[2275]: I0516 09:39:38.023196 2275 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 16 09:39:38.023469 kubelet[2275]: I0516 09:39:38.023419 2275 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 09:39:38.023896 kubelet[2275]: E0516 09:39:38.023876 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 09:39:38.024028 kubelet[2275]: I0516 09:39:38.024003 2275 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 09:39:38.024517 kubelet[2275]: I0516 09:39:38.024487 2275 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 09:39:38.026155 kubelet[2275]: I0516 09:39:38.025712 2275 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 16 09:39:38.026155 kubelet[2275]: I0516 09:39:38.023206 2275 server.go:460] "Adding debug handlers to kubelet server"
May 16 09:39:38.026503 kubelet[2275]: E0516 09:39:38.026267 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms"
May 16 09:39:38.026503 kubelet[2275]: I0516 09:39:38.026365 2275 reconciler.go:26] "Reconciler: start to sync state"
May 16 09:39:38.027782 kubelet[2275]: I0516 09:39:38.027754 2275 factory.go:221] Registration of the systemd container factory successfully
May 16 09:39:38.027866 kubelet[2275]: I0516 09:39:38.027848 2275 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 09:39:38.029156 kubelet[2275]: W0516 09:39:38.029106 2275 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
May 16 09:39:38.029324 kubelet[2275]: E0516 09:39:38.029177 2275 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
May 16 09:39:38.033027 kubelet[2275]: E0516 09:39:38.033000 2275 kubelet.go:1478] "Image
garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 09:39:38.036512 kubelet[2275]: I0516 09:39:38.036123 2275 factory.go:221] Registration of the containerd container factory successfully May 16 09:39:38.036625 kubelet[2275]: E0516 09:39:38.035135 2275 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ff878c420465f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 09:39:38.013255263 +0000 UTC m=+1.287865512,LastTimestamp:2025-05-16 09:39:38.013255263 +0000 UTC m=+1.287865512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 09:39:38.044856 kubelet[2275]: I0516 09:39:38.044698 2275 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 09:39:38.046160 kubelet[2275]: I0516 09:39:38.045874 2275 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 09:39:38.046160 kubelet[2275]: I0516 09:39:38.045897 2275 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 09:39:38.046160 kubelet[2275]: I0516 09:39:38.045918 2275 kubelet.go:2321] "Starting kubelet main sync loop" May 16 09:39:38.046160 kubelet[2275]: E0516 09:39:38.045953 2275 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 09:39:38.047332 kubelet[2275]: I0516 09:39:38.047308 2275 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 09:39:38.047332 kubelet[2275]: I0516 09:39:38.047324 2275 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 09:39:38.047399 kubelet[2275]: I0516 09:39:38.047341 2275 state_mem.go:36] "Initialized new in-memory state store" May 16 09:39:38.049505 kubelet[2275]: W0516 09:39:38.049455 2275 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 16 09:39:38.049567 kubelet[2275]: E0516 09:39:38.049508 2275 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" May 16 09:39:38.110255 kubelet[2275]: I0516 09:39:38.110219 2275 policy_none.go:49] "None policy: Start" May 16 09:39:38.111080 kubelet[2275]: I0516 09:39:38.111018 2275 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 09:39:38.111132 kubelet[2275]: I0516 09:39:38.111116 2275 state_mem.go:35] "Initializing new in-memory state store" May 16 09:39:38.117374 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. May 16 09:39:38.124186 kubelet[2275]: E0516 09:39:38.124152 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:38.138607 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 09:39:38.141492 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 09:39:38.146454 kubelet[2275]: E0516 09:39:38.146426 2275 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 09:39:38.165416 kubelet[2275]: I0516 09:39:38.165394 2275 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 09:39:38.165878 kubelet[2275]: I0516 09:39:38.165602 2275 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 09:39:38.165878 kubelet[2275]: I0516 09:39:38.165619 2275 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 09:39:38.165960 kubelet[2275]: I0516 09:39:38.165936 2275 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 09:39:38.167480 kubelet[2275]: E0516 09:39:38.167455 2275 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 09:39:38.226803 kubelet[2275]: E0516 09:39:38.226759 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" May 16 09:39:38.267496 kubelet[2275]: I0516 09:39:38.267464 2275 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 09:39:38.269667 kubelet[2275]: E0516 09:39:38.269644 2275 kubelet_node_status.go:95] "Unable to register 
node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" May 16 09:39:38.354753 systemd[1]: Created slice kubepods-burstable-poda8e033e6c8428b3c10720c160ff04b80.slice - libcontainer container kubepods-burstable-poda8e033e6c8428b3c10720c160ff04b80.slice. May 16 09:39:38.366017 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 16 09:39:38.392877 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. May 16 09:39:38.428747 kubelet[2275]: I0516 09:39:38.428682 2275 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8e033e6c8428b3c10720c160ff04b80-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e033e6c8428b3c10720c160ff04b80\") " pod="kube-system/kube-apiserver-localhost" May 16 09:39:38.428747 kubelet[2275]: I0516 09:39:38.428734 2275 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8e033e6c8428b3c10720c160ff04b80-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8e033e6c8428b3c10720c160ff04b80\") " pod="kube-system/kube-apiserver-localhost" May 16 09:39:38.428866 kubelet[2275]: I0516 09:39:38.428756 2275 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8e033e6c8428b3c10720c160ff04b80-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e033e6c8428b3c10720c160ff04b80\") " pod="kube-system/kube-apiserver-localhost" May 16 09:39:38.428866 kubelet[2275]: I0516 09:39:38.428785 2275 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:39:38.428866 kubelet[2275]: I0516 09:39:38.428812 2275 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:39:38.428866 kubelet[2275]: I0516 09:39:38.428849 2275 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:39:38.428953 kubelet[2275]: I0516 09:39:38.428883 2275 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:39:38.428953 kubelet[2275]: I0516 09:39:38.428905 2275 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:39:38.428953 kubelet[2275]: I0516 
09:39:38.428920 2275 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 09:39:38.470828 kubelet[2275]: I0516 09:39:38.470794 2275 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 09:39:38.472146 kubelet[2275]: E0516 09:39:38.471451 2275 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" May 16 09:39:38.627845 kubelet[2275]: E0516 09:39:38.627791 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" May 16 09:39:38.666015 containerd[1514]: time="2025-05-16T09:39:38.665909066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8e033e6c8428b3c10720c160ff04b80,Namespace:kube-system,Attempt:0,}" May 16 09:39:38.684219 containerd[1514]: time="2025-05-16T09:39:38.684144712Z" level=info msg="connecting to shim 498f131f4503dcb0d2eeef1ebf40a0668b6e3e7fb258d23f5102ca715cfe6bbb" address="unix:///run/containerd/s/bc6fddd9ff512c627dfb22b7cd9598d1c696273d77b3d32a1e73d34ee1695093" namespace=k8s.io protocol=ttrpc version=3 May 16 09:39:38.692231 containerd[1514]: time="2025-05-16T09:39:38.692197585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 16 09:39:38.696116 containerd[1514]: time="2025-05-16T09:39:38.696049234Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 16 09:39:38.712755 systemd[1]: Started cri-containerd-498f131f4503dcb0d2eeef1ebf40a0668b6e3e7fb258d23f5102ca715cfe6bbb.scope - libcontainer container 498f131f4503dcb0d2eeef1ebf40a0668b6e3e7fb258d23f5102ca715cfe6bbb. May 16 09:39:38.719494 containerd[1514]: time="2025-05-16T09:39:38.719391790Z" level=info msg="connecting to shim d22138e5ca615755c210456dd04de8ce9426699f731c7dbdf960905c02cf9355" address="unix:///run/containerd/s/1546ca430657a2e73d97ac570db2985bfa7c8b871e8f35b769a2a2cf975970cd" namespace=k8s.io protocol=ttrpc version=3 May 16 09:39:38.723689 containerd[1514]: time="2025-05-16T09:39:38.723646229Z" level=info msg="connecting to shim bf965e149a8295d772493b35796a1ff73b92aee5ced5241271892e439d13f638" address="unix:///run/containerd/s/62dc173e60fda9ee29de6b62a78181eb89360a75d08cbea6d4bf83c78ddc51cf" namespace=k8s.io protocol=ttrpc version=3 May 16 09:39:38.745769 systemd[1]: Started cri-containerd-d22138e5ca615755c210456dd04de8ce9426699f731c7dbdf960905c02cf9355.scope - libcontainer container d22138e5ca615755c210456dd04de8ce9426699f731c7dbdf960905c02cf9355. May 16 09:39:38.749148 systemd[1]: Started cri-containerd-bf965e149a8295d772493b35796a1ff73b92aee5ced5241271892e439d13f638.scope - libcontainer container bf965e149a8295d772493b35796a1ff73b92aee5ced5241271892e439d13f638. 
May 16 09:39:38.754944 containerd[1514]: time="2025-05-16T09:39:38.754887536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8e033e6c8428b3c10720c160ff04b80,Namespace:kube-system,Attempt:0,} returns sandbox id \"498f131f4503dcb0d2eeef1ebf40a0668b6e3e7fb258d23f5102ca715cfe6bbb\"" May 16 09:39:38.759637 containerd[1514]: time="2025-05-16T09:39:38.759606741Z" level=info msg="CreateContainer within sandbox \"498f131f4503dcb0d2eeef1ebf40a0668b6e3e7fb258d23f5102ca715cfe6bbb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 09:39:38.767638 containerd[1514]: time="2025-05-16T09:39:38.767578192Z" level=info msg="Container 1c08c3fb82054bb27683481d45441de4db95eb3131c5e31a3c4de63f5f44d4c4: CDI devices from CRI Config.CDIDevices: []" May 16 09:39:38.778633 containerd[1514]: time="2025-05-16T09:39:38.778572546Z" level=info msg="CreateContainer within sandbox \"498f131f4503dcb0d2eeef1ebf40a0668b6e3e7fb258d23f5102ca715cfe6bbb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1c08c3fb82054bb27683481d45441de4db95eb3131c5e31a3c4de63f5f44d4c4\"" May 16 09:39:38.779330 containerd[1514]: time="2025-05-16T09:39:38.779296143Z" level=info msg="StartContainer for \"1c08c3fb82054bb27683481d45441de4db95eb3131c5e31a3c4de63f5f44d4c4\"" May 16 09:39:38.781292 containerd[1514]: time="2025-05-16T09:39:38.781262318Z" level=info msg="connecting to shim 1c08c3fb82054bb27683481d45441de4db95eb3131c5e31a3c4de63f5f44d4c4" address="unix:///run/containerd/s/bc6fddd9ff512c627dfb22b7cd9598d1c696273d77b3d32a1e73d34ee1695093" protocol=ttrpc version=3 May 16 09:39:38.786350 containerd[1514]: time="2025-05-16T09:39:38.786313934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d22138e5ca615755c210456dd04de8ce9426699f731c7dbdf960905c02cf9355\"" May 16 09:39:38.788865 containerd[1514]: 
time="2025-05-16T09:39:38.788834300Z" level=info msg="CreateContainer within sandbox \"d22138e5ca615755c210456dd04de8ce9426699f731c7dbdf960905c02cf9355\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 09:39:38.791359 containerd[1514]: time="2025-05-16T09:39:38.791326419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf965e149a8295d772493b35796a1ff73b92aee5ced5241271892e439d13f638\"" May 16 09:39:38.793525 containerd[1514]: time="2025-05-16T09:39:38.793499651Z" level=info msg="CreateContainer within sandbox \"bf965e149a8295d772493b35796a1ff73b92aee5ced5241271892e439d13f638\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 09:39:38.801883 containerd[1514]: time="2025-05-16T09:39:38.801739094Z" level=info msg="Container 3f86ae855486cb933840213140e10f29a0a7288652f35b55d5a7ad6fe635329b: CDI devices from CRI Config.CDIDevices: []" May 16 09:39:38.802573 containerd[1514]: time="2025-05-16T09:39:38.802533791Z" level=info msg="Container a1f667a5397d630b567b19543fb6220d176b3db4f0566d6691ebe7a26def9f15: CDI devices from CRI Config.CDIDevices: []" May 16 09:39:38.807743 systemd[1]: Started cri-containerd-1c08c3fb82054bb27683481d45441de4db95eb3131c5e31a3c4de63f5f44d4c4.scope - libcontainer container 1c08c3fb82054bb27683481d45441de4db95eb3131c5e31a3c4de63f5f44d4c4. 
May 16 09:39:38.810012 containerd[1514]: time="2025-05-16T09:39:38.809922923Z" level=info msg="CreateContainer within sandbox \"d22138e5ca615755c210456dd04de8ce9426699f731c7dbdf960905c02cf9355\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f86ae855486cb933840213140e10f29a0a7288652f35b55d5a7ad6fe635329b\"" May 16 09:39:38.810510 containerd[1514]: time="2025-05-16T09:39:38.810486876Z" level=info msg="StartContainer for \"3f86ae855486cb933840213140e10f29a0a7288652f35b55d5a7ad6fe635329b\"" May 16 09:39:38.811212 containerd[1514]: time="2025-05-16T09:39:38.811154538Z" level=info msg="CreateContainer within sandbox \"bf965e149a8295d772493b35796a1ff73b92aee5ced5241271892e439d13f638\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a1f667a5397d630b567b19543fb6220d176b3db4f0566d6691ebe7a26def9f15\"" May 16 09:39:38.811812 containerd[1514]: time="2025-05-16T09:39:38.811784590Z" level=info msg="connecting to shim 3f86ae855486cb933840213140e10f29a0a7288652f35b55d5a7ad6fe635329b" address="unix:///run/containerd/s/1546ca430657a2e73d97ac570db2985bfa7c8b871e8f35b769a2a2cf975970cd" protocol=ttrpc version=3 May 16 09:39:38.812332 containerd[1514]: time="2025-05-16T09:39:38.811866132Z" level=info msg="StartContainer for \"a1f667a5397d630b567b19543fb6220d176b3db4f0566d6691ebe7a26def9f15\"" May 16 09:39:38.813023 containerd[1514]: time="2025-05-16T09:39:38.812995920Z" level=info msg="connecting to shim a1f667a5397d630b567b19543fb6220d176b3db4f0566d6691ebe7a26def9f15" address="unix:///run/containerd/s/62dc173e60fda9ee29de6b62a78181eb89360a75d08cbea6d4bf83c78ddc51cf" protocol=ttrpc version=3 May 16 09:39:38.829856 systemd[1]: Started cri-containerd-3f86ae855486cb933840213140e10f29a0a7288652f35b55d5a7ad6fe635329b.scope - libcontainer container 3f86ae855486cb933840213140e10f29a0a7288652f35b55d5a7ad6fe635329b. 
May 16 09:39:38.835724 kubelet[2275]: W0516 09:39:38.834572 2275 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 16 09:39:38.835724 kubelet[2275]: E0516 09:39:38.835551 2275 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" May 16 09:39:38.834762 systemd[1]: Started cri-containerd-a1f667a5397d630b567b19543fb6220d176b3db4f0566d6691ebe7a26def9f15.scope - libcontainer container a1f667a5397d630b567b19543fb6220d176b3db4f0566d6691ebe7a26def9f15. May 16 09:39:38.854120 containerd[1514]: time="2025-05-16T09:39:38.852284018Z" level=info msg="StartContainer for \"1c08c3fb82054bb27683481d45441de4db95eb3131c5e31a3c4de63f5f44d4c4\" returns successfully" May 16 09:39:38.873988 kubelet[2275]: I0516 09:39:38.873930 2275 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 09:39:38.874271 kubelet[2275]: E0516 09:39:38.874248 2275 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" May 16 09:39:38.889071 containerd[1514]: time="2025-05-16T09:39:38.888994535Z" level=info msg="StartContainer for \"a1f667a5397d630b567b19543fb6220d176b3db4f0566d6691ebe7a26def9f15\" returns successfully" May 16 09:39:38.895895 containerd[1514]: time="2025-05-16T09:39:38.895824195Z" level=info msg="StartContainer for \"3f86ae855486cb933840213140e10f29a0a7288652f35b55d5a7ad6fe635329b\" returns successfully" May 16 09:39:38.915842 kubelet[2275]: W0516 09:39:38.915744 
2275 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 16 09:39:38.915842 kubelet[2275]: E0516 09:39:38.915814 2275 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" May 16 09:39:39.675955 kubelet[2275]: I0516 09:39:39.675917 2275 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 09:39:40.489509 kubelet[2275]: E0516 09:39:40.489456 2275 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 09:39:40.585625 kubelet[2275]: I0516 09:39:40.585551 2275 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 09:39:40.585625 kubelet[2275]: E0516 09:39:40.585604 2275 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 09:39:40.601410 kubelet[2275]: E0516 09:39:40.601360 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:40.701884 kubelet[2275]: E0516 09:39:40.701843 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:40.802471 kubelet[2275]: E0516 09:39:40.802369 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:40.903032 kubelet[2275]: E0516 09:39:40.902993 2275 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" May 16 09:39:41.003639 kubelet[2275]: E0516 09:39:41.003593 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:41.104463 kubelet[2275]: E0516 09:39:41.104364 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:41.204989 kubelet[2275]: E0516 09:39:41.204952 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:41.305545 kubelet[2275]: E0516 09:39:41.305502 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:41.406141 kubelet[2275]: E0516 09:39:41.406019 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:41.506793 kubelet[2275]: E0516 09:39:41.506749 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:41.607398 kubelet[2275]: E0516 09:39:41.607351 2275 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:42.009778 kubelet[2275]: I0516 09:39:42.009737 2275 apiserver.go:52] "Watching apiserver" May 16 09:39:42.026876 kubelet[2275]: I0516 09:39:42.026835 2275 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 16 09:39:42.668128 systemd[1]: Reload requested from client PID 2549 ('systemctl') (unit session-7.scope)... May 16 09:39:42.668151 systemd[1]: Reloading... May 16 09:39:42.733611 zram_generator::config[2598]: No configuration found. May 16 09:39:42.795988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 16 09:39:42.892231 systemd[1]: Reloading finished in 223 ms. May 16 09:39:42.930322 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 09:39:42.944042 systemd[1]: kubelet.service: Deactivated successfully. May 16 09:39:42.945636 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 09:39:42.945686 systemd[1]: kubelet.service: Consumed 1.743s CPU time, 116.7M memory peak. May 16 09:39:42.947921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 09:39:43.075428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 09:39:43.078686 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 09:39:43.121029 kubelet[2634]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 09:39:43.121029 kubelet[2634]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 09:39:43.121029 kubelet[2634]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 09:39:43.121353 kubelet[2634]: I0516 09:39:43.121077 2634 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 09:39:43.126611 kubelet[2634]: I0516 09:39:43.126060 2634 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 16 09:39:43.126611 kubelet[2634]: I0516 09:39:43.126090 2634 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 09:39:43.126611 kubelet[2634]: I0516 09:39:43.126325 2634 server.go:929] "Client rotation is on, will bootstrap in background" May 16 09:39:43.127871 kubelet[2634]: I0516 09:39:43.127846 2634 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 09:39:43.130035 kubelet[2634]: I0516 09:39:43.129995 2634 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 09:39:43.134999 kubelet[2634]: I0516 09:39:43.134979 2634 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 09:39:43.137616 kubelet[2634]: I0516 09:39:43.137594 2634 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 09:39:43.137744 kubelet[2634]: I0516 09:39:43.137722 2634 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 09:39:43.137853 kubelet[2634]: I0516 09:39:43.137826 2634 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 09:39:43.138006 kubelet[2634]: I0516 09:39:43.137849 2634 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 16 09:39:43.138075 kubelet[2634]: I0516 09:39:43.138012 2634 topology_manager.go:138] "Creating topology manager with none policy" May 16 09:39:43.138075 kubelet[2634]: I0516 09:39:43.138021 2634 container_manager_linux.go:300] "Creating device plugin manager" May 16 09:39:43.138075 kubelet[2634]: I0516 09:39:43.138049 2634 state_mem.go:36] "Initialized new in-memory state store" May 16 09:39:43.138211 kubelet[2634]: I0516 09:39:43.138200 2634 kubelet.go:408] "Attempting to sync node with API server" May 16 09:39:43.138237 kubelet[2634]: I0516 09:39:43.138214 2634 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 09:39:43.138237 kubelet[2634]: I0516 09:39:43.138233 2634 kubelet.go:314] "Adding apiserver pod source" May 16 09:39:43.138289 kubelet[2634]: I0516 09:39:43.138266 2634 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 09:39:43.139277 kubelet[2634]: I0516 09:39:43.139229 2634 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 09:39:43.140339 kubelet[2634]: I0516 09:39:43.140306 2634 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 09:39:43.140803 kubelet[2634]: I0516 09:39:43.140778 2634 server.go:1269] "Started kubelet" May 16 09:39:43.140914 kubelet[2634]: I0516 09:39:43.140875 2634 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 09:39:43.141111 kubelet[2634]: I0516 09:39:43.141052 2634 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 09:39:43.141424 kubelet[2634]: I0516 09:39:43.141393 2634 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 09:39:43.141869 kubelet[2634]: I0516 09:39:43.141845 2634 server.go:460] "Adding debug handlers to kubelet server" May 16 09:39:43.144214 
kubelet[2634]: I0516 09:39:43.144193 2634 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 09:39:43.144214 kubelet[2634]: I0516 09:39:43.144209 2634 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 09:39:43.144412 kubelet[2634]: I0516 09:39:43.144396 2634 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 09:39:43.147909 kubelet[2634]: I0516 09:39:43.147888 2634 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 16 09:39:43.148014 kubelet[2634]: E0516 09:39:43.144546 2634 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:39:43.148187 kubelet[2634]: I0516 09:39:43.148175 2634 reconciler.go:26] "Reconciler: start to sync state" May 16 09:39:43.148250 kubelet[2634]: E0516 09:39:43.146188 2634 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 09:39:43.148443 kubelet[2634]: I0516 09:39:43.148424 2634 factory.go:221] Registration of the containerd container factory successfully May 16 09:39:43.148510 kubelet[2634]: I0516 09:39:43.148500 2634 factory.go:221] Registration of the systemd container factory successfully May 16 09:39:43.148667 kubelet[2634]: I0516 09:39:43.148645 2634 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 09:39:43.167785 kubelet[2634]: I0516 09:39:43.167737 2634 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 09:39:43.172001 kubelet[2634]: I0516 09:39:43.171969 2634 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 09:39:43.172001 kubelet[2634]: I0516 09:39:43.171998 2634 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 09:39:43.172105 kubelet[2634]: I0516 09:39:43.172018 2634 kubelet.go:2321] "Starting kubelet main sync loop" May 16 09:39:43.172127 kubelet[2634]: E0516 09:39:43.172098 2634 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 09:39:43.191621 kubelet[2634]: I0516 09:39:43.191519 2634 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 09:39:43.191621 kubelet[2634]: I0516 09:39:43.191538 2634 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 09:39:43.191621 kubelet[2634]: I0516 09:39:43.191559 2634 state_mem.go:36] "Initialized new in-memory state store" May 16 09:39:43.191755 kubelet[2634]: I0516 09:39:43.191733 2634 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 09:39:43.191775 kubelet[2634]: I0516 09:39:43.191744 2634 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 09:39:43.191775 kubelet[2634]: I0516 09:39:43.191764 2634 policy_none.go:49] "None policy: Start" May 16 09:39:43.192885 kubelet[2634]: I0516 09:39:43.192864 2634 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 09:39:43.193533 kubelet[2634]: I0516 09:39:43.193041 2634 state_mem.go:35] "Initializing new in-memory state store" May 16 09:39:43.193533 kubelet[2634]: I0516 09:39:43.193199 2634 state_mem.go:75] "Updated machine memory state" May 16 09:39:43.197314 kubelet[2634]: I0516 09:39:43.197285 2634 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 09:39:43.197454 kubelet[2634]: I0516 09:39:43.197433 2634 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 09:39:43.197502 kubelet[2634]: I0516 09:39:43.197448 2634 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 09:39:43.197662 kubelet[2634]: I0516 09:39:43.197640 2634 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 09:39:43.299147 kubelet[2634]: I0516 09:39:43.299070 2634 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 09:39:43.304666 kubelet[2634]: I0516 09:39:43.304639 2634 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 16 09:39:43.304767 kubelet[2634]: I0516 09:39:43.304720 2634 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 09:39:43.349450 kubelet[2634]: I0516 09:39:43.349401 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8e033e6c8428b3c10720c160ff04b80-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e033e6c8428b3c10720c160ff04b80\") " pod="kube-system/kube-apiserver-localhost" May 16 09:39:43.349725 kubelet[2634]: I0516 09:39:43.349462 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8e033e6c8428b3c10720c160ff04b80-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e033e6c8428b3c10720c160ff04b80\") " pod="kube-system/kube-apiserver-localhost" May 16 09:39:43.349725 kubelet[2634]: I0516 09:39:43.349518 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:39:43.349725 kubelet[2634]: I0516 09:39:43.349611 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:39:43.349725 kubelet[2634]: I0516 09:39:43.349645 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 09:39:43.349914 kubelet[2634]: I0516 09:39:43.349661 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8e033e6c8428b3c10720c160ff04b80-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8e033e6c8428b3c10720c160ff04b80\") " pod="kube-system/kube-apiserver-localhost" May 16 09:39:43.349914 kubelet[2634]: I0516 09:39:43.349873 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:39:43.349914 kubelet[2634]: I0516 09:39:43.349891 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:39:43.350028 kubelet[2634]: I0516 09:39:43.350003 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:39:44.139184 kubelet[2634]: I0516 09:39:44.139147 2634 apiserver.go:52] "Watching apiserver" May 16 09:39:44.148893 kubelet[2634]: I0516 09:39:44.148866 2634 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 16 09:39:44.202547 kubelet[2634]: I0516 09:39:44.202471 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2024560260000001 podStartE2EDuration="1.202456026s" podCreationTimestamp="2025-05-16 09:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:39:44.202337639 +0000 UTC m=+1.120286320" watchObservedRunningTime="2025-05-16 09:39:44.202456026 +0000 UTC m=+1.120404667" May 16 09:39:44.209414 kubelet[2634]: I0516 09:39:44.209348 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.209299846 podStartE2EDuration="1.209299846s" podCreationTimestamp="2025-05-16 09:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:39:44.208467259 +0000 UTC m=+1.126415940" watchObservedRunningTime="2025-05-16 09:39:44.209299846 +0000 UTC m=+1.127248527" May 16 09:39:44.222396 kubelet[2634]: I0516 09:39:44.222306 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.222293211 podStartE2EDuration="1.222293211s" podCreationTimestamp="2025-05-16 09:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:39:44.215183211 +0000 UTC m=+1.133131892" watchObservedRunningTime="2025-05-16 09:39:44.222293211 +0000 UTC m=+1.140241892" May 16 09:39:48.152518 sudo[1720]: pam_unix(sudo:session): session closed for user root May 16 09:39:48.153696 sshd[1719]: Connection closed by 10.0.0.1 port 38178 May 16 09:39:48.154415 sshd-session[1717]: pam_unix(sshd:session): session closed for user core May 16 09:39:48.157947 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. May 16 09:39:48.158668 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:38178.service: Deactivated successfully. May 16 09:39:48.160512 systemd[1]: session-7.scope: Deactivated successfully. May 16 09:39:48.161443 systemd[1]: session-7.scope: Consumed 8.089s CPU time, 231.2M memory peak. May 16 09:39:48.163688 systemd-logind[1496]: Removed session 7. May 16 09:39:49.967071 kubelet[2634]: I0516 09:39:49.967032 2634 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 09:39:49.967408 containerd[1514]: time="2025-05-16T09:39:49.967334876Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 09:39:49.967613 kubelet[2634]: I0516 09:39:49.967559 2634 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 09:39:50.725194 systemd[1]: Created slice kubepods-besteffort-pode22b3c2f_c4b6_452c_8c20_721e4b178d95.slice - libcontainer container kubepods-besteffort-pode22b3c2f_c4b6_452c_8c20_721e4b178d95.slice. May 16 09:39:50.823858 systemd[1]: Created slice kubepods-besteffort-podfa109f8c_3b55_4bfa_a89e_ce3353bd8aaf.slice - libcontainer container kubepods-besteffort-podfa109f8c_3b55_4bfa_a89e_ce3353bd8aaf.slice. 
May 16 09:39:50.896634 kubelet[2634]: I0516 09:39:50.896574 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jc4\" (UniqueName: \"kubernetes.io/projected/e22b3c2f-c4b6-452c-8c20-721e4b178d95-kube-api-access-99jc4\") pod \"kube-proxy-2nfgd\" (UID: \"e22b3c2f-c4b6-452c-8c20-721e4b178d95\") " pod="kube-system/kube-proxy-2nfgd" May 16 09:39:50.896805 kubelet[2634]: I0516 09:39:50.896648 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa109f8c-3b55-4bfa-a89e-ce3353bd8aaf-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-gpdxl\" (UID: \"fa109f8c-3b55-4bfa-a89e-ce3353bd8aaf\") " pod="tigera-operator/tigera-operator-6f6897fdc5-gpdxl" May 16 09:39:50.896805 kubelet[2634]: I0516 09:39:50.896693 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e22b3c2f-c4b6-452c-8c20-721e4b178d95-lib-modules\") pod \"kube-proxy-2nfgd\" (UID: \"e22b3c2f-c4b6-452c-8c20-721e4b178d95\") " pod="kube-system/kube-proxy-2nfgd" May 16 09:39:50.896805 kubelet[2634]: I0516 09:39:50.896712 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e22b3c2f-c4b6-452c-8c20-721e4b178d95-kube-proxy\") pod \"kube-proxy-2nfgd\" (UID: \"e22b3c2f-c4b6-452c-8c20-721e4b178d95\") " pod="kube-system/kube-proxy-2nfgd" May 16 09:39:50.896805 kubelet[2634]: I0516 09:39:50.896739 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e22b3c2f-c4b6-452c-8c20-721e4b178d95-xtables-lock\") pod \"kube-proxy-2nfgd\" (UID: \"e22b3c2f-c4b6-452c-8c20-721e4b178d95\") " pod="kube-system/kube-proxy-2nfgd" May 16 09:39:50.896805 kubelet[2634]: I0516 
09:39:50.896783 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h6tl\" (UniqueName: \"kubernetes.io/projected/fa109f8c-3b55-4bfa-a89e-ce3353bd8aaf-kube-api-access-9h6tl\") pod \"tigera-operator-6f6897fdc5-gpdxl\" (UID: \"fa109f8c-3b55-4bfa-a89e-ce3353bd8aaf\") " pod="tigera-operator/tigera-operator-6f6897fdc5-gpdxl" May 16 09:39:51.043737 containerd[1514]: time="2025-05-16T09:39:51.043624114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nfgd,Uid:e22b3c2f-c4b6-452c-8c20-721e4b178d95,Namespace:kube-system,Attempt:0,}" May 16 09:39:51.061280 containerd[1514]: time="2025-05-16T09:39:51.061212284Z" level=info msg="connecting to shim 649a4ee3e58576bf261eca556129e9ad4f61e91c3fe05abb3566a16f71f702e4" address="unix:///run/containerd/s/95912656650e01261922d62da24d97905ba1915b524327808143dd3b2ba8a3b0" namespace=k8s.io protocol=ttrpc version=3 May 16 09:39:51.092792 systemd[1]: Started cri-containerd-649a4ee3e58576bf261eca556129e9ad4f61e91c3fe05abb3566a16f71f702e4.scope - libcontainer container 649a4ee3e58576bf261eca556129e9ad4f61e91c3fe05abb3566a16f71f702e4. 
May 16 09:39:51.117527 containerd[1514]: time="2025-05-16T09:39:51.117481106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nfgd,Uid:e22b3c2f-c4b6-452c-8c20-721e4b178d95,Namespace:kube-system,Attempt:0,} returns sandbox id \"649a4ee3e58576bf261eca556129e9ad4f61e91c3fe05abb3566a16f71f702e4\"" May 16 09:39:51.120603 containerd[1514]: time="2025-05-16T09:39:51.120480646Z" level=info msg="CreateContainer within sandbox \"649a4ee3e58576bf261eca556129e9ad4f61e91c3fe05abb3566a16f71f702e4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 09:39:51.129511 containerd[1514]: time="2025-05-16T09:39:51.129475148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-gpdxl,Uid:fa109f8c-3b55-4bfa-a89e-ce3353bd8aaf,Namespace:tigera-operator,Attempt:0,}" May 16 09:39:51.132530 containerd[1514]: time="2025-05-16T09:39:51.131273912Z" level=info msg="Container 9ecbe33562bf529ffc59aebc52b55abee451a5df8f0dd0c6ea38716468d521fa: CDI devices from CRI Config.CDIDevices: []" May 16 09:39:51.132556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516612395.mount: Deactivated successfully. 
May 16 09:39:51.141045 containerd[1514]: time="2025-05-16T09:39:51.141000865Z" level=info msg="CreateContainer within sandbox \"649a4ee3e58576bf261eca556129e9ad4f61e91c3fe05abb3566a16f71f702e4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ecbe33562bf529ffc59aebc52b55abee451a5df8f0dd0c6ea38716468d521fa\"" May 16 09:39:51.141799 containerd[1514]: time="2025-05-16T09:39:51.141766563Z" level=info msg="StartContainer for \"9ecbe33562bf529ffc59aebc52b55abee451a5df8f0dd0c6ea38716468d521fa\"" May 16 09:39:51.143263 containerd[1514]: time="2025-05-16T09:39:51.143230867Z" level=info msg="connecting to shim 9ecbe33562bf529ffc59aebc52b55abee451a5df8f0dd0c6ea38716468d521fa" address="unix:///run/containerd/s/95912656650e01261922d62da24d97905ba1915b524327808143dd3b2ba8a3b0" protocol=ttrpc version=3 May 16 09:39:51.151805 containerd[1514]: time="2025-05-16T09:39:51.151524882Z" level=info msg="connecting to shim b3df4011ba75e34161f4cc18a6bd55addc1166acf526413b3290b3a8c56d3da6" address="unix:///run/containerd/s/4a4c732b0b2135145fffbfe9dae7d189ac2e1dfbff9b2195b16fbc72cd63a92d" namespace=k8s.io protocol=ttrpc version=3 May 16 09:39:51.165747 systemd[1]: Started cri-containerd-9ecbe33562bf529ffc59aebc52b55abee451a5df8f0dd0c6ea38716468d521fa.scope - libcontainer container 9ecbe33562bf529ffc59aebc52b55abee451a5df8f0dd0c6ea38716468d521fa. May 16 09:39:51.168492 systemd[1]: Started cri-containerd-b3df4011ba75e34161f4cc18a6bd55addc1166acf526413b3290b3a8c56d3da6.scope - libcontainer container b3df4011ba75e34161f4cc18a6bd55addc1166acf526413b3290b3a8c56d3da6. 
May 16 09:39:51.203777 containerd[1514]: time="2025-05-16T09:39:51.203683283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-gpdxl,Uid:fa109f8c-3b55-4bfa-a89e-ce3353bd8aaf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b3df4011ba75e34161f4cc18a6bd55addc1166acf526413b3290b3a8c56d3da6\"" May 16 09:39:51.205747 containerd[1514]: time="2025-05-16T09:39:51.205681723Z" level=info msg="StartContainer for \"9ecbe33562bf529ffc59aebc52b55abee451a5df8f0dd0c6ea38716468d521fa\" returns successfully" May 16 09:39:51.211861 containerd[1514]: time="2025-05-16T09:39:51.211666162Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 16 09:39:52.823444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276050527.mount: Deactivated successfully. May 16 09:39:53.359193 containerd[1514]: time="2025-05-16T09:39:53.359154601Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:39:53.360313 containerd[1514]: time="2025-05-16T09:39:53.360280151Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 16 09:39:53.361304 containerd[1514]: time="2025-05-16T09:39:53.361271919Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:39:53.363836 containerd[1514]: time="2025-05-16T09:39:53.363802987Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:39:53.364443 containerd[1514]: time="2025-05-16T09:39:53.364325395Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag 
\"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.152626308s" May 16 09:39:53.364443 containerd[1514]: time="2025-05-16T09:39:53.364358681Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 16 09:39:53.368994 containerd[1514]: time="2025-05-16T09:39:53.368619482Z" level=info msg="CreateContainer within sandbox \"b3df4011ba75e34161f4cc18a6bd55addc1166acf526413b3290b3a8c56d3da6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 09:39:53.376731 containerd[1514]: time="2025-05-16T09:39:53.375377545Z" level=info msg="Container de276e407a77a5f48047bd5111b88c4a8f88a68b975461f0ff7b410e5a06b80e: CDI devices from CRI Config.CDIDevices: []" May 16 09:39:53.388288 containerd[1514]: time="2025-05-16T09:39:53.388248362Z" level=info msg="CreateContainer within sandbox \"b3df4011ba75e34161f4cc18a6bd55addc1166acf526413b3290b3a8c56d3da6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"de276e407a77a5f48047bd5111b88c4a8f88a68b975461f0ff7b410e5a06b80e\"" May 16 09:39:53.389754 containerd[1514]: time="2025-05-16T09:39:53.389731533Z" level=info msg="StartContainer for \"de276e407a77a5f48047bd5111b88c4a8f88a68b975461f0ff7b410e5a06b80e\"" May 16 09:39:53.390945 containerd[1514]: time="2025-05-16T09:39:53.390895890Z" level=info msg="connecting to shim de276e407a77a5f48047bd5111b88c4a8f88a68b975461f0ff7b410e5a06b80e" address="unix:///run/containerd/s/4a4c732b0b2135145fffbfe9dae7d189ac2e1dfbff9b2195b16fbc72cd63a92d" protocol=ttrpc version=3 May 16 09:39:53.420778 systemd[1]: Started cri-containerd-de276e407a77a5f48047bd5111b88c4a8f88a68b975461f0ff7b410e5a06b80e.scope - libcontainer container de276e407a77a5f48047bd5111b88c4a8f88a68b975461f0ff7b410e5a06b80e. 
May 16 09:39:53.448862 containerd[1514]: time="2025-05-16T09:39:53.448828209Z" level=info msg="StartContainer for \"de276e407a77a5f48047bd5111b88c4a8f88a68b975461f0ff7b410e5a06b80e\" returns successfully" May 16 09:39:54.220417 kubelet[2634]: I0516 09:39:54.220352 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2nfgd" podStartSLOduration=4.220336284 podStartE2EDuration="4.220336284s" podCreationTimestamp="2025-05-16 09:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:39:52.216983631 +0000 UTC m=+9.134932272" watchObservedRunningTime="2025-05-16 09:39:54.220336284 +0000 UTC m=+11.138284965" May 16 09:39:54.221349 kubelet[2634]: I0516 09:39:54.220474 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-gpdxl" podStartSLOduration=2.065673747 podStartE2EDuration="4.220469705s" podCreationTimestamp="2025-05-16 09:39:50 +0000 UTC" firstStartedPulling="2025-05-16 09:39:51.211309618 +0000 UTC m=+8.129258299" lastFinishedPulling="2025-05-16 09:39:53.366105576 +0000 UTC m=+10.284054257" observedRunningTime="2025-05-16 09:39:54.220051357 +0000 UTC m=+11.138000038" watchObservedRunningTime="2025-05-16 09:39:54.220469705 +0000 UTC m=+11.138418386" May 16 09:39:55.512107 update_engine[1505]: I20250516 09:39:55.511616 1505 update_attempter.cc:509] Updating boot flags... May 16 09:39:57.609792 systemd[1]: Created slice kubepods-besteffort-podc19d0cda_6641_4435_8f37_c91c7e2888c1.slice - libcontainer container kubepods-besteffort-podc19d0cda_6641_4435_8f37_c91c7e2888c1.slice. May 16 09:39:57.652470 systemd[1]: Created slice kubepods-besteffort-pod05373fef_e87e_4edb_828b_0945468024f8.slice - libcontainer container kubepods-besteffort-pod05373fef_e87e_4edb_828b_0945468024f8.slice. 
May 16 09:39:57.735710 kubelet[2634]: I0516 09:39:57.735667 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05373fef-e87e-4edb-828b-0945468024f8-var-run-calico\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.736336 kubelet[2634]: I0516 09:39:57.736156 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05373fef-e87e-4edb-828b-0945468024f8-cni-log-dir\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.736336 kubelet[2634]: I0516 09:39:57.736227 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmqts\" (UniqueName: \"kubernetes.io/projected/05373fef-e87e-4edb-828b-0945468024f8-kube-api-access-hmqts\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.736336 kubelet[2634]: I0516 09:39:57.736255 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c19d0cda-6641-4435-8f37-c91c7e2888c1-tigera-ca-bundle\") pod \"calico-typha-ffcc54665-6628k\" (UID: \"c19d0cda-6641-4435-8f37-c91c7e2888c1\") " pod="calico-system/calico-typha-ffcc54665-6628k" May 16 09:39:57.736336 kubelet[2634]: I0516 09:39:57.736272 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c19d0cda-6641-4435-8f37-c91c7e2888c1-typha-certs\") pod \"calico-typha-ffcc54665-6628k\" (UID: \"c19d0cda-6641-4435-8f37-c91c7e2888c1\") " pod="calico-system/calico-typha-ffcc54665-6628k" May 16 
09:39:57.736336 kubelet[2634]: I0516 09:39:57.736291 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62w9t\" (UniqueName: \"kubernetes.io/projected/c19d0cda-6641-4435-8f37-c91c7e2888c1-kube-api-access-62w9t\") pod \"calico-typha-ffcc54665-6628k\" (UID: \"c19d0cda-6641-4435-8f37-c91c7e2888c1\") " pod="calico-system/calico-typha-ffcc54665-6628k" May 16 09:39:57.737246 kubelet[2634]: I0516 09:39:57.736307 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/05373fef-e87e-4edb-828b-0945468024f8-policysync\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.737246 kubelet[2634]: I0516 09:39:57.736556 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05373fef-e87e-4edb-828b-0945468024f8-lib-modules\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.737246 kubelet[2634]: I0516 09:39:57.736656 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05373fef-e87e-4edb-828b-0945468024f8-tigera-ca-bundle\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.737600 kubelet[2634]: I0516 09:39:57.737382 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05373fef-e87e-4edb-828b-0945468024f8-flexvol-driver-host\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.737600 
kubelet[2634]: I0516 09:39:57.737423 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05373fef-e87e-4edb-828b-0945468024f8-node-certs\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.737600 kubelet[2634]: I0516 09:39:57.737443 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05373fef-e87e-4edb-828b-0945468024f8-var-lib-calico\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.737600 kubelet[2634]: I0516 09:39:57.737462 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05373fef-e87e-4edb-828b-0945468024f8-cni-bin-dir\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.737600 kubelet[2634]: I0516 09:39:57.737482 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05373fef-e87e-4edb-828b-0945468024f8-cni-net-dir\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.737787 kubelet[2634]: I0516 09:39:57.737536 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05373fef-e87e-4edb-828b-0945468024f8-xtables-lock\") pod \"calico-node-dt9zh\" (UID: \"05373fef-e87e-4edb-828b-0945468024f8\") " pod="calico-system/calico-node-dt9zh" May 16 09:39:57.769479 kubelet[2634]: E0516 09:39:57.769369 2634 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:39:57.837841 kubelet[2634]: I0516 09:39:57.837782 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68xlt\" (UniqueName: \"kubernetes.io/projected/3e454e8c-f06e-47e5-848c-b0b3db2ddb78-kube-api-access-68xlt\") pod \"csi-node-driver-zfjpx\" (UID: \"3e454e8c-f06e-47e5-848c-b0b3db2ddb78\") " pod="calico-system/csi-node-driver-zfjpx" May 16 09:39:57.837957 kubelet[2634]: I0516 09:39:57.837941 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3e454e8c-f06e-47e5-848c-b0b3db2ddb78-varrun\") pod \"csi-node-driver-zfjpx\" (UID: \"3e454e8c-f06e-47e5-848c-b0b3db2ddb78\") " pod="calico-system/csi-node-driver-zfjpx" May 16 09:39:57.838014 kubelet[2634]: I0516 09:39:57.837978 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e454e8c-f06e-47e5-848c-b0b3db2ddb78-kubelet-dir\") pod \"csi-node-driver-zfjpx\" (UID: \"3e454e8c-f06e-47e5-848c-b0b3db2ddb78\") " pod="calico-system/csi-node-driver-zfjpx" May 16 09:39:57.838040 kubelet[2634]: I0516 09:39:57.838014 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3e454e8c-f06e-47e5-848c-b0b3db2ddb78-socket-dir\") pod \"csi-node-driver-zfjpx\" (UID: \"3e454e8c-f06e-47e5-848c-b0b3db2ddb78\") " pod="calico-system/csi-node-driver-zfjpx" May 16 09:39:57.840164 kubelet[2634]: E0516 09:39:57.840105 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input May 16 09:39:57.840437 kubelet[2634]: W0516 09:39:57.840396 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.840477 kubelet[2634]: E0516 09:39:57.840442 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.841187 kubelet[2634]: E0516 09:39:57.841103 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.841187 kubelet[2634]: W0516 09:39:57.841123 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.841187 kubelet[2634]: E0516 09:39:57.841143 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.841187 kubelet[2634]: I0516 09:39:57.841166 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3e454e8c-f06e-47e5-848c-b0b3db2ddb78-registration-dir\") pod \"csi-node-driver-zfjpx\" (UID: \"3e454e8c-f06e-47e5-848c-b0b3db2ddb78\") " pod="calico-system/csi-node-driver-zfjpx" May 16 09:39:57.841646 kubelet[2634]: E0516 09:39:57.841626 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.841693 kubelet[2634]: W0516 09:39:57.841648 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.841984 kubelet[2634]: E0516 09:39:57.841969 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.842071 kubelet[2634]: W0516 09:39:57.842055 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.842189 kubelet[2634]: E0516 09:39:57.842175 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.842435 kubelet[2634]: E0516 09:39:57.841667 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.844617 kubelet[2634]: E0516 09:39:57.843367 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.844617 kubelet[2634]: W0516 09:39:57.843534 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.844864 kubelet[2634]: E0516 09:39:57.844845 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.845496 kubelet[2634]: E0516 09:39:57.845454 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.845496 kubelet[2634]: W0516 09:39:57.845473 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.845885 kubelet[2634]: E0516 09:39:57.845702 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.847760 kubelet[2634]: E0516 09:39:57.847735 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.847760 kubelet[2634]: W0516 09:39:57.847757 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.848000 kubelet[2634]: E0516 09:39:57.847781 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.848049 kubelet[2634]: E0516 09:39:57.848031 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.848049 kubelet[2634]: W0516 09:39:57.848044 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.848134 kubelet[2634]: E0516 09:39:57.848060 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.848242 kubelet[2634]: E0516 09:39:57.848229 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.848242 kubelet[2634]: W0516 09:39:57.848241 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.848420 kubelet[2634]: E0516 09:39:57.848255 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.848420 kubelet[2634]: E0516 09:39:57.848404 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.848420 kubelet[2634]: W0516 09:39:57.848411 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.848420 kubelet[2634]: E0516 09:39:57.848418 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.849515 kubelet[2634]: E0516 09:39:57.849198 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.849515 kubelet[2634]: W0516 09:39:57.849213 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.849515 kubelet[2634]: E0516 09:39:57.849226 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.863512 kubelet[2634]: E0516 09:39:57.863255 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.863512 kubelet[2634]: W0516 09:39:57.863278 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.863512 kubelet[2634]: E0516 09:39:57.863311 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.864576 kubelet[2634]: E0516 09:39:57.864541 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.864782 kubelet[2634]: W0516 09:39:57.864563 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.864782 kubelet[2634]: E0516 09:39:57.864714 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.914278 containerd[1514]: time="2025-05-16T09:39:57.914233959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ffcc54665-6628k,Uid:c19d0cda-6641-4435-8f37-c91c7e2888c1,Namespace:calico-system,Attempt:0,}" May 16 09:39:57.944349 kubelet[2634]: E0516 09:39:57.944191 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.944349 kubelet[2634]: W0516 09:39:57.944217 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.944349 kubelet[2634]: E0516 09:39:57.944238 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.944756 kubelet[2634]: E0516 09:39:57.944461 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.944756 kubelet[2634]: W0516 09:39:57.944470 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.944756 kubelet[2634]: E0516 09:39:57.944487 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.944756 kubelet[2634]: E0516 09:39:57.944716 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.944756 kubelet[2634]: W0516 09:39:57.944725 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.944756 kubelet[2634]: E0516 09:39:57.944741 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.945595 kubelet[2634]: E0516 09:39:57.944933 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.945595 kubelet[2634]: W0516 09:39:57.944945 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.945595 kubelet[2634]: E0516 09:39:57.944959 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.945595 kubelet[2634]: E0516 09:39:57.945117 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.945595 kubelet[2634]: W0516 09:39:57.945138 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.945595 kubelet[2634]: E0516 09:39:57.945173 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.945595 kubelet[2634]: E0516 09:39:57.945414 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.945595 kubelet[2634]: W0516 09:39:57.945423 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.945595 kubelet[2634]: E0516 09:39:57.945441 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.945979 kubelet[2634]: E0516 09:39:57.945624 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.945979 kubelet[2634]: W0516 09:39:57.945633 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.945979 kubelet[2634]: E0516 09:39:57.945651 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.945979 kubelet[2634]: E0516 09:39:57.945812 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.945979 kubelet[2634]: W0516 09:39:57.945820 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.945979 kubelet[2634]: E0516 09:39:57.945854 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.946233 kubelet[2634]: E0516 09:39:57.945983 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.946233 kubelet[2634]: W0516 09:39:57.945992 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.946233 kubelet[2634]: E0516 09:39:57.946025 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.946233 kubelet[2634]: E0516 09:39:57.946216 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.946233 kubelet[2634]: W0516 09:39:57.946224 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.946438 kubelet[2634]: E0516 09:39:57.946241 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.946438 kubelet[2634]: E0516 09:39:57.946420 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.946438 kubelet[2634]: W0516 09:39:57.946428 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.946542 kubelet[2634]: E0516 09:39:57.946443 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.946672 kubelet[2634]: E0516 09:39:57.946658 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.946672 kubelet[2634]: W0516 09:39:57.946671 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.946721 kubelet[2634]: E0516 09:39:57.946684 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.946981 kubelet[2634]: E0516 09:39:57.946957 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.947062 kubelet[2634]: W0516 09:39:57.946982 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.947062 kubelet[2634]: E0516 09:39:57.947002 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.947357 kubelet[2634]: E0516 09:39:57.947341 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.947357 kubelet[2634]: W0516 09:39:57.947356 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.947412 kubelet[2634]: E0516 09:39:57.947373 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.947672 kubelet[2634]: E0516 09:39:57.947657 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.947705 kubelet[2634]: W0516 09:39:57.947678 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.947933 kubelet[2634]: E0516 09:39:57.947738 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.947933 kubelet[2634]: E0516 09:39:57.947831 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.947933 kubelet[2634]: W0516 09:39:57.947840 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.947933 kubelet[2634]: E0516 09:39:57.947880 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.948026 kubelet[2634]: E0516 09:39:57.948015 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.948122 kubelet[2634]: W0516 09:39:57.948028 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.948122 kubelet[2634]: E0516 09:39:57.948065 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.948285 kubelet[2634]: E0516 09:39:57.948272 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.948326 kubelet[2634]: W0516 09:39:57.948286 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.948326 kubelet[2634]: E0516 09:39:57.948307 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.948519 kubelet[2634]: E0516 09:39:57.948506 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.948519 kubelet[2634]: W0516 09:39:57.948518 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.948617 kubelet[2634]: E0516 09:39:57.948532 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.949646 kubelet[2634]: E0516 09:39:57.949617 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.949646 kubelet[2634]: W0516 09:39:57.949638 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.949749 kubelet[2634]: E0516 09:39:57.949658 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.949812 kubelet[2634]: E0516 09:39:57.949800 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.949812 kubelet[2634]: W0516 09:39:57.949811 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.949945 kubelet[2634]: E0516 09:39:57.949842 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.949979 kubelet[2634]: E0516 09:39:57.949967 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.949979 kubelet[2634]: W0516 09:39:57.949975 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.950989 kubelet[2634]: E0516 09:39:57.949991 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.950989 kubelet[2634]: E0516 09:39:57.950130 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.950989 kubelet[2634]: W0516 09:39:57.950137 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.950989 kubelet[2634]: E0516 09:39:57.950144 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.950989 kubelet[2634]: E0516 09:39:57.950393 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.950989 kubelet[2634]: W0516 09:39:57.950405 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.950989 kubelet[2634]: E0516 09:39:57.950416 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.950989 kubelet[2634]: E0516 09:39:57.950747 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.950989 kubelet[2634]: W0516 09:39:57.950760 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.950989 kubelet[2634]: E0516 09:39:57.950770 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:39:57.951222 containerd[1514]: time="2025-05-16T09:39:57.950134468Z" level=info msg="connecting to shim 6651aa558e1f4e44ca9378ef9769d6b81a377ec3f2c9a1cb14b69f9e8667b3d0" address="unix:///run/containerd/s/d7d8d7c6c349d1baca80c17738c3c12deaf76c625120c4ed5512e87b2dd0f245" namespace=k8s.io protocol=ttrpc version=3 May 16 09:39:57.957776 containerd[1514]: time="2025-05-16T09:39:57.957707756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dt9zh,Uid:05373fef-e87e-4edb-828b-0945468024f8,Namespace:calico-system,Attempt:0,}" May 16 09:39:57.969675 kubelet[2634]: E0516 09:39:57.969648 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:39:57.969875 kubelet[2634]: W0516 09:39:57.969814 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:39:57.969875 kubelet[2634]: E0516 09:39:57.969840 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:39:57.980716 containerd[1514]: time="2025-05-16T09:39:57.980597806Z" level=info msg="connecting to shim 5e1b2e9b8852ae19c06c3efd665037090b4b590844213a768fff193245a4a0a4" address="unix:///run/containerd/s/0a0e228f010a1f77612ca3335a0f2a7e932f43378909dbb6a3d17ddf4d340f07" namespace=k8s.io protocol=ttrpc version=3 May 16 09:39:58.023816 systemd[1]: Started cri-containerd-6651aa558e1f4e44ca9378ef9769d6b81a377ec3f2c9a1cb14b69f9e8667b3d0.scope - libcontainer container 6651aa558e1f4e44ca9378ef9769d6b81a377ec3f2c9a1cb14b69f9e8667b3d0. 
May 16 09:39:58.027964 systemd[1]: Started cri-containerd-5e1b2e9b8852ae19c06c3efd665037090b4b590844213a768fff193245a4a0a4.scope - libcontainer container 5e1b2e9b8852ae19c06c3efd665037090b4b590844213a768fff193245a4a0a4. May 16 09:39:58.070633 containerd[1514]: time="2025-05-16T09:39:58.070534000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dt9zh,Uid:05373fef-e87e-4edb-828b-0945468024f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e1b2e9b8852ae19c06c3efd665037090b4b590844213a768fff193245a4a0a4\"" May 16 09:39:58.075812 containerd[1514]: time="2025-05-16T09:39:58.075726709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ffcc54665-6628k,Uid:c19d0cda-6641-4435-8f37-c91c7e2888c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"6651aa558e1f4e44ca9378ef9769d6b81a377ec3f2c9a1cb14b69f9e8667b3d0\"" May 16 09:39:58.079000 containerd[1514]: time="2025-05-16T09:39:58.078873443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 16 09:39:59.173065 kubelet[2634]: E0516 09:39:59.173010 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:40:01.173015 kubelet[2634]: E0516 09:40:01.172964 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:40:03.172920 kubelet[2634]: E0516 09:40:03.172848 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:40:04.502004 containerd[1514]: time="2025-05-16T09:40:04.501959254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:04.502668 containerd[1514]: time="2025-05-16T09:40:04.502622494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 16 09:40:04.503906 containerd[1514]: time="2025-05-16T09:40:04.503616492Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:04.505253 containerd[1514]: time="2025-05-16T09:40:04.505224444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:04.506390 containerd[1514]: time="2025-05-16T09:40:04.506363740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 6.427260823s" May 16 09:40:04.506499 containerd[1514]: time="2025-05-16T09:40:04.506480034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 16 09:40:04.507550 containerd[1514]: time="2025-05-16T09:40:04.507527599Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 16 09:40:04.509623 containerd[1514]: time="2025-05-16T09:40:04.508994294Z" level=info msg="CreateContainer within sandbox \"5e1b2e9b8852ae19c06c3efd665037090b4b590844213a768fff193245a4a0a4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 09:40:04.518604 containerd[1514]: time="2025-05-16T09:40:04.517755379Z" level=info msg="Container 26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:04.524974 containerd[1514]: time="2025-05-16T09:40:04.524941516Z" level=info msg="CreateContainer within sandbox \"5e1b2e9b8852ae19c06c3efd665037090b4b590844213a768fff193245a4a0a4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31\"" May 16 09:40:04.525697 containerd[1514]: time="2025-05-16T09:40:04.525672843Z" level=info msg="StartContainer for \"26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31\"" May 16 09:40:04.527083 containerd[1514]: time="2025-05-16T09:40:04.527056328Z" level=info msg="connecting to shim 26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31" address="unix:///run/containerd/s/0a0e228f010a1f77612ca3335a0f2a7e932f43378909dbb6a3d17ddf4d340f07" protocol=ttrpc version=3 May 16 09:40:04.609772 systemd[1]: Started cri-containerd-26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31.scope - libcontainer container 26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31. May 16 09:40:04.644904 containerd[1514]: time="2025-05-16T09:40:04.644843660Z" level=info msg="StartContainer for \"26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31\" returns successfully" May 16 09:40:04.681070 systemd[1]: cri-containerd-26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31.scope: Deactivated successfully. 
May 16 09:40:04.682239 systemd[1]: cri-containerd-26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31.scope: Consumed 53ms CPU time, 8.1M memory peak, 6.2M written to disk. May 16 09:40:04.714600 containerd[1514]: time="2025-05-16T09:40:04.714477207Z" level=info msg="received exit event container_id:\"26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31\" id:\"26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31\" pid:3192 exited_at:{seconds:1747388404 nanos:699304917}" May 16 09:40:04.717952 containerd[1514]: time="2025-05-16T09:40:04.717910457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31\" id:\"26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31\" pid:3192 exited_at:{seconds:1747388404 nanos:699304917}" May 16 09:40:04.745250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26dc0270512c90936eac0f8ed014fd1f55fd59c2d9a4450534fb9050da324b31-rootfs.mount: Deactivated successfully. 
May 16 09:40:05.173296 kubelet[2634]: E0516 09:40:05.173245 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:40:07.173761 kubelet[2634]: E0516 09:40:07.173700 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:40:09.137012 containerd[1514]: time="2025-05-16T09:40:09.136973863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:09.137436 containerd[1514]: time="2025-05-16T09:40:09.137412347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 16 09:40:09.138223 containerd[1514]: time="2025-05-16T09:40:09.138194507Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:09.140043 containerd[1514]: time="2025-05-16T09:40:09.140002091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:09.140479 containerd[1514]: time="2025-05-16T09:40:09.140447576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 4.632888734s" May 16 09:40:09.140479 containerd[1514]: time="2025-05-16T09:40:09.140477419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 16 09:40:09.141908 containerd[1514]: time="2025-05-16T09:40:09.141753269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 16 09:40:09.153505 containerd[1514]: time="2025-05-16T09:40:09.153459901Z" level=info msg="CreateContainer within sandbox \"6651aa558e1f4e44ca9378ef9769d6b81a377ec3f2c9a1cb14b69f9e8667b3d0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 16 09:40:09.162763 containerd[1514]: time="2025-05-16T09:40:09.162731885Z" level=info msg="Container 3ff3d856f3a69fe8057352c7f2dc3eb3208dc03be6d1aa1f88a3be95e8316c7f: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:09.168573 containerd[1514]: time="2025-05-16T09:40:09.168515393Z" level=info msg="CreateContainer within sandbox \"6651aa558e1f4e44ca9378ef9769d6b81a377ec3f2c9a1cb14b69f9e8667b3d0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3ff3d856f3a69fe8057352c7f2dc3eb3208dc03be6d1aa1f88a3be95e8316c7f\"" May 16 09:40:09.168976 containerd[1514]: time="2025-05-16T09:40:09.168946197Z" level=info msg="StartContainer for \"3ff3d856f3a69fe8057352c7f2dc3eb3208dc03be6d1aa1f88a3be95e8316c7f\"" May 16 09:40:09.170290 containerd[1514]: time="2025-05-16T09:40:09.170264331Z" level=info msg="connecting to shim 3ff3d856f3a69fe8057352c7f2dc3eb3208dc03be6d1aa1f88a3be95e8316c7f" address="unix:///run/containerd/s/d7d8d7c6c349d1baca80c17738c3c12deaf76c625120c4ed5512e87b2dd0f245" protocol=ttrpc version=3 May 16 09:40:09.173610 kubelet[2634]: E0516 09:40:09.173345 2634 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:40:09.192736 systemd[1]: Started cri-containerd-3ff3d856f3a69fe8057352c7f2dc3eb3208dc03be6d1aa1f88a3be95e8316c7f.scope - libcontainer container 3ff3d856f3a69fe8057352c7f2dc3eb3208dc03be6d1aa1f88a3be95e8316c7f. May 16 09:40:09.233909 containerd[1514]: time="2025-05-16T09:40:09.233872806Z" level=info msg="StartContainer for \"3ff3d856f3a69fe8057352c7f2dc3eb3208dc03be6d1aa1f88a3be95e8316c7f\" returns successfully" May 16 09:40:10.243633 kubelet[2634]: I0516 09:40:10.243569 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 09:40:11.172874 kubelet[2634]: E0516 09:40:11.172816 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:40:13.173273 kubelet[2634]: E0516 09:40:13.173232 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:40:13.776107 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:34502.service - OpenSSH per-connection server daemon (10.0.0.1:34502). 
May 16 09:40:13.842397 sshd[3275]: Accepted publickey for core from 10.0.0.1 port 34502 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:13.843543 sshd-session[3275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:13.848790 systemd-logind[1496]: New session 8 of user core. May 16 09:40:13.855715 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 09:40:13.975887 sshd[3277]: Connection closed by 10.0.0.1 port 34502 May 16 09:40:13.976296 sshd-session[3275]: pam_unix(sshd:session): session closed for user core May 16 09:40:13.979515 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:34502.service: Deactivated successfully. May 16 09:40:13.981278 systemd[1]: session-8.scope: Deactivated successfully. May 16 09:40:13.982641 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. May 16 09:40:13.988041 systemd-logind[1496]: Removed session 8. May 16 09:40:14.460042 kubelet[2634]: I0516 09:40:14.460010 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 09:40:14.471556 kubelet[2634]: I0516 09:40:14.471472 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-ffcc54665-6628k" podStartSLOduration=6.408337916 podStartE2EDuration="17.471446355s" podCreationTimestamp="2025-05-16 09:39:57 +0000 UTC" firstStartedPulling="2025-05-16 09:39:58.078173822 +0000 UTC m=+14.996122503" lastFinishedPulling="2025-05-16 09:40:09.141282261 +0000 UTC m=+26.059230942" observedRunningTime="2025-05-16 09:40:09.251316102 +0000 UTC m=+26.169264783" watchObservedRunningTime="2025-05-16 09:40:14.471446355 +0000 UTC m=+31.389395036" May 16 09:40:15.174171 kubelet[2634]: E0516 09:40:15.173628 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:40:15.673420 containerd[1514]: time="2025-05-16T09:40:15.673375156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:15.674214 containerd[1514]: time="2025-05-16T09:40:15.674181304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 16 09:40:15.675204 containerd[1514]: time="2025-05-16T09:40:15.675159546Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:15.677409 containerd[1514]: time="2025-05-16T09:40:15.677363332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:15.678078 containerd[1514]: time="2025-05-16T09:40:15.677970263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 6.53608506s" May 16 09:40:15.678078 containerd[1514]: time="2025-05-16T09:40:15.677999465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 16 09:40:15.681457 containerd[1514]: time="2025-05-16T09:40:15.681427393Z" level=info msg="CreateContainer within sandbox \"5e1b2e9b8852ae19c06c3efd665037090b4b590844213a768fff193245a4a0a4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 09:40:15.688486 
containerd[1514]: time="2025-05-16T09:40:15.688459025Z" level=info msg="Container 67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:15.695785 containerd[1514]: time="2025-05-16T09:40:15.695703475Z" level=info msg="CreateContainer within sandbox \"5e1b2e9b8852ae19c06c3efd665037090b4b590844213a768fff193245a4a0a4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6\"" May 16 09:40:15.696125 containerd[1514]: time="2025-05-16T09:40:15.696101108Z" level=info msg="StartContainer for \"67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6\"" May 16 09:40:15.703866 containerd[1514]: time="2025-05-16T09:40:15.703830478Z" level=info msg="connecting to shim 67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6" address="unix:///run/containerd/s/0a0e228f010a1f77612ca3335a0f2a7e932f43378909dbb6a3d17ddf4d340f07" protocol=ttrpc version=3 May 16 09:40:15.723780 systemd[1]: Started cri-containerd-67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6.scope - libcontainer container 67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6. May 16 09:40:15.765608 containerd[1514]: time="2025-05-16T09:40:15.765537710Z" level=info msg="StartContainer for \"67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6\" returns successfully" May 16 09:40:16.252047 systemd[1]: cri-containerd-67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6.scope: Deactivated successfully. May 16 09:40:16.252326 systemd[1]: cri-containerd-67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6.scope: Consumed 457ms CPU time, 166.6M memory peak, 4K read from disk, 150.3M written to disk. 
May 16 09:40:16.261749 containerd[1514]: time="2025-05-16T09:40:16.261710372Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6\" id:\"67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6\" pid:3315 exited_at:{seconds:1747388416 nanos:260799978}" May 16 09:40:16.266505 containerd[1514]: time="2025-05-16T09:40:16.266419036Z" level=info msg="received exit event container_id:\"67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6\" id:\"67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6\" pid:3315 exited_at:{seconds:1747388416 nanos:260799978}" May 16 09:40:16.288094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67b03c10bec5739f0ac60746df78e46d59a1e2692c46148a9fbcdea920a285e6-rootfs.mount: Deactivated successfully. May 16 09:40:16.300610 kubelet[2634]: I0516 09:40:16.300357 2634 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 16 09:40:16.351878 systemd[1]: Created slice kubepods-besteffort-podc67ce86b_05fd_4101_9eb9_8a3abdef04eb.slice - libcontainer container kubepods-besteffort-podc67ce86b_05fd_4101_9eb9_8a3abdef04eb.slice. May 16 09:40:16.359868 systemd[1]: Created slice kubepods-besteffort-pod1ff10a03_7281_4b2c_957e_df9b401ce60e.slice - libcontainer container kubepods-besteffort-pod1ff10a03_7281_4b2c_957e_df9b401ce60e.slice. May 16 09:40:16.369855 systemd[1]: Created slice kubepods-burstable-pod595d9a03_d9ed_42ac_938e_abc42fcc06c0.slice - libcontainer container kubepods-burstable-pod595d9a03_d9ed_42ac_938e_abc42fcc06c0.slice. May 16 09:40:16.375856 systemd[1]: Created slice kubepods-burstable-pod047fb491_99ce_45e8_a0a7_c50febff27c0.slice - libcontainer container kubepods-burstable-pod047fb491_99ce_45e8_a0a7_c50febff27c0.slice. 
May 16 09:40:16.380192 kubelet[2634]: I0516 09:40:16.380059 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/047fb491-99ce-45e8-a0a7-c50febff27c0-config-volume\") pod \"coredns-6f6b679f8f-x7nd6\" (UID: \"047fb491-99ce-45e8-a0a7-c50febff27c0\") " pod="kube-system/coredns-6f6b679f8f-x7nd6" May 16 09:40:16.380669 kubelet[2634]: I0516 09:40:16.380646 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9346848e-d416-4e64-93cc-dfc68fe9bf99-calico-apiserver-certs\") pod \"calico-apiserver-7478f7b79b-7sqsw\" (UID: \"9346848e-d416-4e64-93cc-dfc68fe9bf99\") " pod="calico-apiserver/calico-apiserver-7478f7b79b-7sqsw" May 16 09:40:16.380965 kubelet[2634]: I0516 09:40:16.380858 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjd7g\" (UniqueName: \"kubernetes.io/projected/1ff10a03-7281-4b2c-957e-df9b401ce60e-kube-api-access-wjd7g\") pod \"calico-apiserver-7478f7b79b-mpbwm\" (UID: \"1ff10a03-7281-4b2c-957e-df9b401ce60e\") " pod="calico-apiserver/calico-apiserver-7478f7b79b-mpbwm" May 16 09:40:16.381152 kubelet[2634]: I0516 09:40:16.381065 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c67ce86b-05fd-4101-9eb9-8a3abdef04eb-tigera-ca-bundle\") pod \"calico-kube-controllers-757d5fc7b7-4p8nh\" (UID: \"c67ce86b-05fd-4101-9eb9-8a3abdef04eb\") " pod="calico-system/calico-kube-controllers-757d5fc7b7-4p8nh" May 16 09:40:16.381681 kubelet[2634]: I0516 09:40:16.381650 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1ff10a03-7281-4b2c-957e-df9b401ce60e-calico-apiserver-certs\") pod 
\"calico-apiserver-7478f7b79b-mpbwm\" (UID: \"1ff10a03-7281-4b2c-957e-df9b401ce60e\") " pod="calico-apiserver/calico-apiserver-7478f7b79b-mpbwm" May 16 09:40:16.382725 systemd[1]: Created slice kubepods-besteffort-pod9346848e_d416_4e64_93cc_dfc68fe9bf99.slice - libcontainer container kubepods-besteffort-pod9346848e_d416_4e64_93cc_dfc68fe9bf99.slice. May 16 09:40:16.385850 kubelet[2634]: I0516 09:40:16.385808 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/595d9a03-d9ed-42ac-938e-abc42fcc06c0-config-volume\") pod \"coredns-6f6b679f8f-m88r4\" (UID: \"595d9a03-d9ed-42ac-938e-abc42fcc06c0\") " pod="kube-system/coredns-6f6b679f8f-m88r4" May 16 09:40:16.385954 kubelet[2634]: I0516 09:40:16.385940 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26h2m\" (UniqueName: \"kubernetes.io/projected/595d9a03-d9ed-42ac-938e-abc42fcc06c0-kube-api-access-26h2m\") pod \"coredns-6f6b679f8f-m88r4\" (UID: \"595d9a03-d9ed-42ac-938e-abc42fcc06c0\") " pod="kube-system/coredns-6f6b679f8f-m88r4" May 16 09:40:16.386050 kubelet[2634]: I0516 09:40:16.386034 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8xvc\" (UniqueName: \"kubernetes.io/projected/c67ce86b-05fd-4101-9eb9-8a3abdef04eb-kube-api-access-k8xvc\") pod \"calico-kube-controllers-757d5fc7b7-4p8nh\" (UID: \"c67ce86b-05fd-4101-9eb9-8a3abdef04eb\") " pod="calico-system/calico-kube-controllers-757d5fc7b7-4p8nh" May 16 09:40:16.386111 kubelet[2634]: I0516 09:40:16.386100 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszb4\" (UniqueName: \"kubernetes.io/projected/047fb491-99ce-45e8-a0a7-c50febff27c0-kube-api-access-kszb4\") pod \"coredns-6f6b679f8f-x7nd6\" (UID: \"047fb491-99ce-45e8-a0a7-c50febff27c0\") " 
pod="kube-system/coredns-6f6b679f8f-x7nd6" May 16 09:40:16.386200 kubelet[2634]: I0516 09:40:16.386178 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh7kl\" (UniqueName: \"kubernetes.io/projected/9346848e-d416-4e64-93cc-dfc68fe9bf99-kube-api-access-rh7kl\") pod \"calico-apiserver-7478f7b79b-7sqsw\" (UID: \"9346848e-d416-4e64-93cc-dfc68fe9bf99\") " pod="calico-apiserver/calico-apiserver-7478f7b79b-7sqsw" May 16 09:40:16.657706 containerd[1514]: time="2025-05-16T09:40:16.657609960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757d5fc7b7-4p8nh,Uid:c67ce86b-05fd-4101-9eb9-8a3abdef04eb,Namespace:calico-system,Attempt:0,}" May 16 09:40:16.663610 containerd[1514]: time="2025-05-16T09:40:16.663445076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7478f7b79b-mpbwm,Uid:1ff10a03-7281-4b2c-957e-df9b401ce60e,Namespace:calico-apiserver,Attempt:0,}" May 16 09:40:16.683733 containerd[1514]: time="2025-05-16T09:40:16.682658362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x7nd6,Uid:047fb491-99ce-45e8-a0a7-c50febff27c0,Namespace:kube-system,Attempt:0,}" May 16 09:40:16.683733 containerd[1514]: time="2025-05-16T09:40:16.682714446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-m88r4,Uid:595d9a03-d9ed-42ac-938e-abc42fcc06c0,Namespace:kube-system,Attempt:0,}" May 16 09:40:16.691245 containerd[1514]: time="2025-05-16T09:40:16.690707298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7478f7b79b-7sqsw,Uid:9346848e-d416-4e64-93cc-dfc68fe9bf99,Namespace:calico-apiserver,Attempt:0,}" May 16 09:40:17.051867 containerd[1514]: time="2025-05-16T09:40:17.051751557Z" level=error msg="Failed to destroy network for sandbox \"fc941bc8f0e868c17341d99835f2c045ce9d133bd581271cad62e9df533e2398\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.053513 systemd[1]: run-netns-cni\x2d2abb7b9d\x2d3d48\x2d1748\x2de392\x2da971981ba01b.mount: Deactivated successfully. May 16 09:40:17.054564 containerd[1514]: time="2025-05-16T09:40:17.053629506Z" level=error msg="Failed to destroy network for sandbox \"21a481182c590bb10f73bde9620e088b03ed20f5e6212f67f6c47da960feb872\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.057395 containerd[1514]: time="2025-05-16T09:40:17.057359160Z" level=error msg="Failed to destroy network for sandbox \"d55952621abeb21a139dad2e5f794d4ebaab45fb96a398892b6887d07f69dff4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.061088 containerd[1514]: time="2025-05-16T09:40:17.061044371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757d5fc7b7-4p8nh,Uid:c67ce86b-05fd-4101-9eb9-8a3abdef04eb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21a481182c590bb10f73bde9620e088b03ed20f5e6212f67f6c47da960feb872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.061981 containerd[1514]: time="2025-05-16T09:40:17.061910680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7478f7b79b-mpbwm,Uid:1ff10a03-7281-4b2c-957e-df9b401ce60e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fc941bc8f0e868c17341d99835f2c045ce9d133bd581271cad62e9df533e2398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.062067 kubelet[2634]: E0516 09:40:17.061966 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21a481182c590bb10f73bde9620e088b03ed20f5e6212f67f6c47da960feb872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.062067 kubelet[2634]: E0516 09:40:17.062049 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21a481182c590bb10f73bde9620e088b03ed20f5e6212f67f6c47da960feb872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-757d5fc7b7-4p8nh" May 16 09:40:17.062323 kubelet[2634]: E0516 09:40:17.062265 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc941bc8f0e868c17341d99835f2c045ce9d133bd581271cad62e9df533e2398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.062378 kubelet[2634]: E0516 09:40:17.062364 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc941bc8f0e868c17341d99835f2c045ce9d133bd581271cad62e9df533e2398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7478f7b79b-mpbwm" May 16 09:40:17.062928 containerd[1514]: time="2025-05-16T09:40:17.062896477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7478f7b79b-7sqsw,Uid:9346848e-d416-4e64-93cc-dfc68fe9bf99,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d55952621abeb21a139dad2e5f794d4ebaab45fb96a398892b6887d07f69dff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.063243 kubelet[2634]: E0516 09:40:17.063137 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d55952621abeb21a139dad2e5f794d4ebaab45fb96a398892b6887d07f69dff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.063243 kubelet[2634]: E0516 09:40:17.063204 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d55952621abeb21a139dad2e5f794d4ebaab45fb96a398892b6887d07f69dff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7478f7b79b-7sqsw" May 16 09:40:17.064383 containerd[1514]: time="2025-05-16T09:40:17.064353512Z" level=error msg="Failed to destroy network for sandbox \"bfee2a9df7f43419447fe2b7d129e86580f34b29b9c8fe1822fa66b664e9b6be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.065047 kubelet[2634]: E0516 09:40:17.065016 2634 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d55952621abeb21a139dad2e5f794d4ebaab45fb96a398892b6887d07f69dff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7478f7b79b-7sqsw" May 16 09:40:17.065565 containerd[1514]: time="2025-05-16T09:40:17.065173777Z" level=error msg="Failed to destroy network for sandbox \"4c3b73d27fdb1fd842ac9af8984d8530d31f85a3270a49b3288b3dca59141194\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.065687 kubelet[2634]: E0516 09:40:17.065019 2634 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc941bc8f0e868c17341d99835f2c045ce9d133bd581271cad62e9df533e2398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7478f7b79b-mpbwm" May 16 09:40:17.065687 kubelet[2634]: E0516 09:40:17.065290 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7478f7b79b-7sqsw_calico-apiserver(9346848e-d416-4e64-93cc-dfc68fe9bf99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7478f7b79b-7sqsw_calico-apiserver(9346848e-d416-4e64-93cc-dfc68fe9bf99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d55952621abeb21a139dad2e5f794d4ebaab45fb96a398892b6887d07f69dff4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7478f7b79b-7sqsw" podUID="9346848e-d416-4e64-93cc-dfc68fe9bf99" May 16 09:40:17.065782 kubelet[2634]: E0516 09:40:17.065339 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7478f7b79b-mpbwm_calico-apiserver(1ff10a03-7281-4b2c-957e-df9b401ce60e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7478f7b79b-mpbwm_calico-apiserver(1ff10a03-7281-4b2c-957e-df9b401ce60e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc941bc8f0e868c17341d99835f2c045ce9d133bd581271cad62e9df533e2398\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7478f7b79b-mpbwm" podUID="1ff10a03-7281-4b2c-957e-df9b401ce60e" May 16 09:40:17.065782 kubelet[2634]: E0516 09:40:17.065405 2634 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21a481182c590bb10f73bde9620e088b03ed20f5e6212f67f6c47da960feb872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-757d5fc7b7-4p8nh" May 16 09:40:17.065869 kubelet[2634]: E0516 09:40:17.065460 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-757d5fc7b7-4p8nh_calico-system(c67ce86b-05fd-4101-9eb9-8a3abdef04eb)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"calico-kube-controllers-757d5fc7b7-4p8nh_calico-system(c67ce86b-05fd-4101-9eb9-8a3abdef04eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21a481182c590bb10f73bde9620e088b03ed20f5e6212f67f6c47da960feb872\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-757d5fc7b7-4p8nh" podUID="c67ce86b-05fd-4101-9eb9-8a3abdef04eb" May 16 09:40:17.065940 containerd[1514]: time="2025-05-16T09:40:17.065401675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-m88r4,Uid:595d9a03-d9ed-42ac-938e-abc42fcc06c0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfee2a9df7f43419447fe2b7d129e86580f34b29b9c8fe1822fa66b664e9b6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.065980 kubelet[2634]: E0516 09:40:17.065947 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfee2a9df7f43419447fe2b7d129e86580f34b29b9c8fe1822fa66b664e9b6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.066018 kubelet[2634]: E0516 09:40:17.065984 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfee2a9df7f43419447fe2b7d129e86580f34b29b9c8fe1822fa66b664e9b6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-m88r4" May 16 09:40:17.066018 kubelet[2634]: E0516 09:40:17.066000 2634 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfee2a9df7f43419447fe2b7d129e86580f34b29b9c8fe1822fa66b664e9b6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-m88r4" May 16 09:40:17.066077 kubelet[2634]: E0516 09:40:17.066030 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-m88r4_kube-system(595d9a03-d9ed-42ac-938e-abc42fcc06c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-m88r4_kube-system(595d9a03-d9ed-42ac-938e-abc42fcc06c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfee2a9df7f43419447fe2b7d129e86580f34b29b9c8fe1822fa66b664e9b6be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-m88r4" podUID="595d9a03-d9ed-42ac-938e-abc42fcc06c0" May 16 09:40:17.066906 containerd[1514]: time="2025-05-16T09:40:17.066861630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x7nd6,Uid:047fb491-99ce-45e8-a0a7-c50febff27c0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c3b73d27fdb1fd842ac9af8984d8530d31f85a3270a49b3288b3dca59141194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.067096 kubelet[2634]: E0516 09:40:17.067064 
2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c3b73d27fdb1fd842ac9af8984d8530d31f85a3270a49b3288b3dca59141194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.067146 kubelet[2634]: E0516 09:40:17.067130 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c3b73d27fdb1fd842ac9af8984d8530d31f85a3270a49b3288b3dca59141194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-x7nd6" May 16 09:40:17.067173 kubelet[2634]: E0516 09:40:17.067150 2634 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c3b73d27fdb1fd842ac9af8984d8530d31f85a3270a49b3288b3dca59141194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-x7nd6" May 16 09:40:17.067214 kubelet[2634]: E0516 09:40:17.067194 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-x7nd6_kube-system(047fb491-99ce-45e8-a0a7-c50febff27c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-x7nd6_kube-system(047fb491-99ce-45e8-a0a7-c50febff27c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c3b73d27fdb1fd842ac9af8984d8530d31f85a3270a49b3288b3dca59141194\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-x7nd6" podUID="047fb491-99ce-45e8-a0a7-c50febff27c0" May 16 09:40:17.178389 systemd[1]: Created slice kubepods-besteffort-pod3e454e8c_f06e_47e5_848c_b0b3db2ddb78.slice - libcontainer container kubepods-besteffort-pod3e454e8c_f06e_47e5_848c_b0b3db2ddb78.slice. May 16 09:40:17.180507 containerd[1514]: time="2025-05-16T09:40:17.180243823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zfjpx,Uid:3e454e8c-f06e-47e5-848c-b0b3db2ddb78,Namespace:calico-system,Attempt:0,}" May 16 09:40:17.222989 containerd[1514]: time="2025-05-16T09:40:17.222931234Z" level=error msg="Failed to destroy network for sandbox \"3e7f47b18dd33c3612e401f310c602ec62d5a408aa7f2ad8c72494ff74220d3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.223882 containerd[1514]: time="2025-05-16T09:40:17.223844826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zfjpx,Uid:3e454e8c-f06e-47e5-848c-b0b3db2ddb78,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7f47b18dd33c3612e401f310c602ec62d5a408aa7f2ad8c72494ff74220d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:40:17.224081 kubelet[2634]: E0516 09:40:17.224048 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7f47b18dd33c3612e401f310c602ec62d5a408aa7f2ad8c72494ff74220d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 16 09:40:17.224130 kubelet[2634]: E0516 09:40:17.224104 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7f47b18dd33c3612e401f310c602ec62d5a408aa7f2ad8c72494ff74220d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zfjpx" May 16 09:40:17.224160 kubelet[2634]: E0516 09:40:17.224122 2634 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7f47b18dd33c3612e401f310c602ec62d5a408aa7f2ad8c72494ff74220d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zfjpx" May 16 09:40:17.224190 kubelet[2634]: E0516 09:40:17.224171 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zfjpx_calico-system(3e454e8c-f06e-47e5-848c-b0b3db2ddb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zfjpx_calico-system(3e454e8c-f06e-47e5-848c-b0b3db2ddb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e7f47b18dd33c3612e401f310c602ec62d5a408aa7f2ad8c72494ff74220d3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zfjpx" podUID="3e454e8c-f06e-47e5-848c-b0b3db2ddb78" May 16 09:40:17.269976 containerd[1514]: time="2025-05-16T09:40:17.269673444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 16 09:40:17.689439 systemd[1]: 
run-netns-cni\x2d9a01a565\x2db076\x2dac93\x2d7ecb\x2d4a3fd9d28eeb.mount: Deactivated successfully. May 16 09:40:17.689526 systemd[1]: run-netns-cni\x2df683ca34\x2d60b5\x2d54c9\x2dbcc6\x2d3bc5ec9f17a8.mount: Deactivated successfully. May 16 09:40:17.689571 systemd[1]: run-netns-cni\x2dd324538c\x2db43b\x2d02b0\x2d38d7\x2d247412bd6e7e.mount: Deactivated successfully. May 16 09:40:17.689651 systemd[1]: run-netns-cni\x2dc8a63e14\x2d4a36\x2d80a1\x2d1043\x2d59c69c5f912b.mount: Deactivated successfully. May 16 09:40:18.991736 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:34516.service - OpenSSH per-connection server daemon (10.0.0.1:34516). May 16 09:40:19.037626 sshd[3576]: Accepted publickey for core from 10.0.0.1 port 34516 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:19.038577 sshd-session[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:19.042039 systemd-logind[1496]: New session 9 of user core. May 16 09:40:19.048708 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 09:40:19.153922 sshd[3578]: Connection closed by 10.0.0.1 port 34516 May 16 09:40:19.154566 sshd-session[3576]: pam_unix(sshd:session): session closed for user core May 16 09:40:19.157802 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:34516.service: Deactivated successfully. May 16 09:40:19.159478 systemd[1]: session-9.scope: Deactivated successfully. May 16 09:40:19.160279 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit. May 16 09:40:19.161422 systemd-logind[1496]: Removed session 9. May 16 09:40:22.540563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586631799.mount: Deactivated successfully. 
May 16 09:40:22.790735 containerd[1514]: time="2025-05-16T09:40:22.790626117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:22.808218 containerd[1514]: time="2025-05-16T09:40:22.791594703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 16 09:40:22.808218 containerd[1514]: time="2025-05-16T09:40:22.792505524Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:22.808354 containerd[1514]: time="2025-05-16T09:40:22.807608422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 5.537850331s" May 16 09:40:22.808354 containerd[1514]: time="2025-05-16T09:40:22.808327070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 16 09:40:22.808829 containerd[1514]: time="2025-05-16T09:40:22.808760179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:22.827761 containerd[1514]: time="2025-05-16T09:40:22.827701055Z" level=info msg="CreateContainer within sandbox \"5e1b2e9b8852ae19c06c3efd665037090b4b590844213a768fff193245a4a0a4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 09:40:22.860536 containerd[1514]: time="2025-05-16T09:40:22.860058035Z" level=info msg="Container 
7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:22.868930 containerd[1514]: time="2025-05-16T09:40:22.868888070Z" level=info msg="CreateContainer within sandbox \"5e1b2e9b8852ae19c06c3efd665037090b4b590844213a768fff193245a4a0a4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa\"" May 16 09:40:22.869505 containerd[1514]: time="2025-05-16T09:40:22.869455508Z" level=info msg="StartContainer for \"7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa\"" May 16 09:40:22.871686 containerd[1514]: time="2025-05-16T09:40:22.871657817Z" level=info msg="connecting to shim 7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa" address="unix:///run/containerd/s/0a0e228f010a1f77612ca3335a0f2a7e932f43378909dbb6a3d17ddf4d340f07" protocol=ttrpc version=3 May 16 09:40:22.903759 systemd[1]: Started cri-containerd-7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa.scope - libcontainer container 7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa. May 16 09:40:22.938013 containerd[1514]: time="2025-05-16T09:40:22.937955443Z" level=info msg="StartContainer for \"7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa\" returns successfully" May 16 09:40:23.115356 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 09:40:23.115468 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 16 09:40:23.317892 kubelet[2634]: I0516 09:40:23.317004 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dt9zh" podStartSLOduration=1.5823825820000001 podStartE2EDuration="26.316988394s" podCreationTimestamp="2025-05-16 09:39:57 +0000 UTC" firstStartedPulling="2025-05-16 09:39:58.074740767 +0000 UTC m=+14.992689408" lastFinishedPulling="2025-05-16 09:40:22.809346539 +0000 UTC m=+39.727295220" observedRunningTime="2025-05-16 09:40:23.316867346 +0000 UTC m=+40.234816027" watchObservedRunningTime="2025-05-16 09:40:23.316988394 +0000 UTC m=+40.234937075" May 16 09:40:23.408895 containerd[1514]: time="2025-05-16T09:40:23.408861230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa\" id:\"9d9d1c95367cd557988c1775de060dcfb66e8ed8083547cb36dede13a8fd80dd\" pid:3665 exit_status:1 exited_at:{seconds:1747388423 nanos:408520688}" May 16 09:40:24.168943 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:57642.service - OpenSSH per-connection server daemon (10.0.0.1:57642). May 16 09:40:24.235103 sshd[3685]: Accepted publickey for core from 10.0.0.1 port 57642 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:24.236611 sshd-session[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:24.241368 systemd-logind[1496]: New session 10 of user core. May 16 09:40:24.252726 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 16 09:40:24.375832 sshd[3687]: Connection closed by 10.0.0.1 port 57642 May 16 09:40:24.376154 containerd[1514]: time="2025-05-16T09:40:24.376023187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa\" id:\"5d2f6e90294f3e24bbf38889fc9f193c264caf60db0bf56d2c952843e822f3c5\" pid:3709 exit_status:1 exited_at:{seconds:1747388424 nanos:375749410}" May 16 09:40:24.376249 sshd-session[3685]: pam_unix(sshd:session): session closed for user core May 16 09:40:24.380466 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:57642.service: Deactivated successfully. May 16 09:40:24.383171 systemd[1]: session-10.scope: Deactivated successfully. May 16 09:40:24.383886 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. May 16 09:40:24.384872 systemd-logind[1496]: Removed session 10. May 16 09:40:24.724336 systemd-networkd[1429]: vxlan.calico: Link UP May 16 09:40:24.724344 systemd-networkd[1429]: vxlan.calico: Gained carrier May 16 09:40:25.368810 containerd[1514]: time="2025-05-16T09:40:25.368759550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa\" id:\"e4719470241b8901d1a0602aed04a5daf14a0b5b7d22f0e37441e70c603ddf45\" pid:3930 exit_status:1 exited_at:{seconds:1747388425 nanos:363499187}" May 16 09:40:26.300716 systemd-networkd[1429]: vxlan.calico: Gained IPv6LL May 16 09:40:28.174080 containerd[1514]: time="2025-05-16T09:40:28.173803713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-m88r4,Uid:595d9a03-d9ed-42ac-938e-abc42fcc06c0,Namespace:kube-system,Attempt:0,}" May 16 09:40:28.464892 systemd-networkd[1429]: cali48c1ec3eef0: Link UP May 16 09:40:28.465053 systemd-networkd[1429]: cali48c1ec3eef0: Gained carrier May 16 09:40:28.478605 containerd[1514]: 2025-05-16 09:40:28.285 [INFO][3948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--m88r4-eth0 coredns-6f6b679f8f- kube-system 595d9a03-d9ed-42ac-938e-abc42fcc06c0 769 0 2025-05-16 09:39:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-m88r4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48c1ec3eef0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-m88r4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m88r4-" May 16 09:40:28.478605 containerd[1514]: 2025-05-16 09:40:28.285 [INFO][3948] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-m88r4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m88r4-eth0" May 16 09:40:28.478605 containerd[1514]: 2025-05-16 09:40:28.390 [INFO][3962] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" HandleID="k8s-pod-network.7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Workload="localhost-k8s-coredns--6f6b679f8f--m88r4-eth0" May 16 09:40:28.478860 containerd[1514]: 2025-05-16 09:40:28.404 [INFO][3962] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" HandleID="k8s-pod-network.7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Workload="localhost-k8s-coredns--6f6b679f8f--m88r4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035c6a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-m88r4", 
"timestamp":"2025-05-16 09:40:28.390017273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:40:28.478860 containerd[1514]: 2025-05-16 09:40:28.404 [INFO][3962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:40:28.478860 containerd[1514]: 2025-05-16 09:40:28.404 [INFO][3962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 09:40:28.478860 containerd[1514]: 2025-05-16 09:40:28.404 [INFO][3962] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:40:28.478860 containerd[1514]: 2025-05-16 09:40:28.407 [INFO][3962] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" host="localhost" May 16 09:40:28.478860 containerd[1514]: 2025-05-16 09:40:28.428 [INFO][3962] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:40:28.478860 containerd[1514]: 2025-05-16 09:40:28.436 [INFO][3962] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:40:28.478860 containerd[1514]: 2025-05-16 09:40:28.438 [INFO][3962] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:40:28.478860 containerd[1514]: 2025-05-16 09:40:28.440 [INFO][3962] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 09:40:28.478860 containerd[1514]: 2025-05-16 09:40:28.441 [INFO][3962] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" host="localhost" May 16 09:40:28.479094 containerd[1514]: 2025-05-16 09:40:28.443 [INFO][3962] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6 May 16 09:40:28.479094 containerd[1514]: 2025-05-16 09:40:28.447 [INFO][3962] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" host="localhost" May 16 09:40:28.479094 containerd[1514]: 2025-05-16 09:40:28.454 [INFO][3962] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" host="localhost" May 16 09:40:28.479094 containerd[1514]: 2025-05-16 09:40:28.454 [INFO][3962] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" host="localhost" May 16 09:40:28.479094 containerd[1514]: 2025-05-16 09:40:28.454 [INFO][3962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 09:40:28.479094 containerd[1514]: 2025-05-16 09:40:28.454 [INFO][3962] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" HandleID="k8s-pod-network.7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Workload="localhost-k8s-coredns--6f6b679f8f--m88r4-eth0" May 16 09:40:28.479246 containerd[1514]: 2025-05-16 09:40:28.457 [INFO][3948] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-m88r4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m88r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--m88r4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"595d9a03-d9ed-42ac-938e-abc42fcc06c0", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-m88r4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48c1ec3eef0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:28.479298 containerd[1514]: 2025-05-16 09:40:28.457 [INFO][3948] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-m88r4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m88r4-eth0" May 16 09:40:28.479298 containerd[1514]: 2025-05-16 09:40:28.457 [INFO][3948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48c1ec3eef0 ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-m88r4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m88r4-eth0" May 16 09:40:28.479298 containerd[1514]: 2025-05-16 09:40:28.464 [INFO][3948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-m88r4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m88r4-eth0" May 16 09:40:28.479383 containerd[1514]: 2025-05-16 09:40:28.464 [INFO][3948] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-m88r4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m88r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--m88r4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"595d9a03-d9ed-42ac-938e-abc42fcc06c0", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6", Pod:"coredns-6f6b679f8f-m88r4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48c1ec3eef0", MAC:"e6:61:d8:e8:f2:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:28.479383 containerd[1514]: 2025-05-16 09:40:28.474 [INFO][3948] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-m88r4" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m88r4-eth0" May 16 09:40:28.562596 containerd[1514]: time="2025-05-16T09:40:28.562490117Z" level=info msg="connecting to shim 7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6" address="unix:///run/containerd/s/d58f31df778c82f46f926c45ce30c55a06f15692d35cffda7a0663aea60c3c3f" namespace=k8s.io protocol=ttrpc version=3 May 16 09:40:28.590714 systemd[1]: Started cri-containerd-7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6.scope - libcontainer container 7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6. May 16 09:40:28.603791 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:40:28.625046 containerd[1514]: time="2025-05-16T09:40:28.625009119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-m88r4,Uid:595d9a03-d9ed-42ac-938e-abc42fcc06c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6\"" May 16 09:40:28.631826 containerd[1514]: time="2025-05-16T09:40:28.631773015Z" level=info msg="CreateContainer within sandbox \"7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 09:40:28.640449 containerd[1514]: time="2025-05-16T09:40:28.639875827Z" level=info msg="Container b6d30e752a8bb2608c202b94fac6653ea30467a2eab1b39e93c2f95c4f37173c: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:28.643527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459738264.mount: Deactivated successfully. 
May 16 09:40:28.645315 containerd[1514]: time="2025-05-16T09:40:28.645280008Z" level=info msg="CreateContainer within sandbox \"7a56c2014097f1e7eb4231934447bcbefc8c79ef38a88eda1cf078bb265662a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6d30e752a8bb2608c202b94fac6653ea30467a2eab1b39e93c2f95c4f37173c\"" May 16 09:40:28.646298 containerd[1514]: time="2025-05-16T09:40:28.646140095Z" level=info msg="StartContainer for \"b6d30e752a8bb2608c202b94fac6653ea30467a2eab1b39e93c2f95c4f37173c\"" May 16 09:40:28.647098 containerd[1514]: time="2025-05-16T09:40:28.647074307Z" level=info msg="connecting to shim b6d30e752a8bb2608c202b94fac6653ea30467a2eab1b39e93c2f95c4f37173c" address="unix:///run/containerd/s/d58f31df778c82f46f926c45ce30c55a06f15692d35cffda7a0663aea60c3c3f" protocol=ttrpc version=3 May 16 09:40:28.669744 systemd[1]: Started cri-containerd-b6d30e752a8bb2608c202b94fac6653ea30467a2eab1b39e93c2f95c4f37173c.scope - libcontainer container b6d30e752a8bb2608c202b94fac6653ea30467a2eab1b39e93c2f95c4f37173c. May 16 09:40:28.707475 containerd[1514]: time="2025-05-16T09:40:28.707298301Z" level=info msg="StartContainer for \"b6d30e752a8bb2608c202b94fac6653ea30467a2eab1b39e93c2f95c4f37173c\" returns successfully" May 16 09:40:29.335537 kubelet[2634]: I0516 09:40:29.335451 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-m88r4" podStartSLOduration=39.335433737 podStartE2EDuration="39.335433737s" podCreationTimestamp="2025-05-16 09:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:40:29.323831671 +0000 UTC m=+46.241780392" watchObservedRunningTime="2025-05-16 09:40:29.335433737 +0000 UTC m=+46.253382418" May 16 09:40:29.394957 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:57652.service - OpenSSH per-connection server daemon (10.0.0.1:57652). 
May 16 09:40:29.449347 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 57652 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:29.450671 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:29.454531 systemd-logind[1496]: New session 11 of user core. May 16 09:40:29.460740 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 09:40:29.574237 sshd[4074]: Connection closed by 10.0.0.1 port 57652 May 16 09:40:29.574855 sshd-session[4070]: pam_unix(sshd:session): session closed for user core May 16 09:40:29.593007 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:57652.service: Deactivated successfully. May 16 09:40:29.595514 systemd[1]: session-11.scope: Deactivated successfully. May 16 09:40:29.597283 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. May 16 09:40:29.599144 systemd-logind[1496]: Removed session 11. May 16 09:40:29.600457 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:57664.service - OpenSSH per-connection server daemon (10.0.0.1:57664). May 16 09:40:29.631808 systemd-networkd[1429]: cali48c1ec3eef0: Gained IPv6LL May 16 09:40:29.657564 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 57664 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:29.658949 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:29.663695 systemd-logind[1496]: New session 12 of user core. May 16 09:40:29.675791 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 09:40:29.820094 sshd[4093]: Connection closed by 10.0.0.1 port 57664 May 16 09:40:29.820515 sshd-session[4091]: pam_unix(sshd:session): session closed for user core May 16 09:40:29.834797 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:57664.service: Deactivated successfully. May 16 09:40:29.836318 systemd[1]: session-12.scope: Deactivated successfully. 
May 16 09:40:29.838105 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. May 16 09:40:29.842881 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:57674.service - OpenSSH per-connection server daemon (10.0.0.1:57674). May 16 09:40:29.844186 systemd-logind[1496]: Removed session 12. May 16 09:40:29.902235 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 57674 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:29.903741 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:29.908657 systemd-logind[1496]: New session 13 of user core. May 16 09:40:29.923737 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 09:40:30.034866 sshd[4106]: Connection closed by 10.0.0.1 port 57674 May 16 09:40:30.035343 sshd-session[4104]: pam_unix(sshd:session): session closed for user core May 16 09:40:30.038985 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:57674.service: Deactivated successfully. May 16 09:40:30.041004 systemd[1]: session-13.scope: Deactivated successfully. May 16 09:40:30.041671 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. May 16 09:40:30.043114 systemd-logind[1496]: Removed session 13. 
May 16 09:40:30.173940 containerd[1514]: time="2025-05-16T09:40:30.173797991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757d5fc7b7-4p8nh,Uid:c67ce86b-05fd-4101-9eb9-8a3abdef04eb,Namespace:calico-system,Attempt:0,}" May 16 09:40:30.294457 systemd-networkd[1429]: cali2fc4d5e376a: Link UP May 16 09:40:30.295784 systemd-networkd[1429]: cali2fc4d5e376a: Gained carrier May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.215 [INFO][4125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0 calico-kube-controllers-757d5fc7b7- calico-system c67ce86b-05fd-4101-9eb9-8a3abdef04eb 763 0 2025-05-16 09:39:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:757d5fc7b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-757d5fc7b7-4p8nh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2fc4d5e376a [] []}} ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Namespace="calico-system" Pod="calico-kube-controllers-757d5fc7b7-4p8nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.215 [INFO][4125] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Namespace="calico-system" Pod="calico-kube-controllers-757d5fc7b7-4p8nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.246 [INFO][4141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" HandleID="k8s-pod-network.d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Workload="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.256 [INFO][4141] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" HandleID="k8s-pod-network.d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Workload="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f5890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-757d5fc7b7-4p8nh", "timestamp":"2025-05-16 09:40:30.246281139 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.256 [INFO][4141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.256 [INFO][4141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.256 [INFO][4141] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.258 [INFO][4141] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" host="localhost" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.262 [INFO][4141] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.266 [INFO][4141] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.268 [INFO][4141] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.270 [INFO][4141] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.270 [INFO][4141] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" host="localhost" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.271 [INFO][4141] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.275 [INFO][4141] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" host="localhost" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.281 [INFO][4141] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" host="localhost" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.281 [INFO][4141] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" host="localhost" May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.281 [INFO][4141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 09:40:30.319903 containerd[1514]: 2025-05-16 09:40:30.281 [INFO][4141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" HandleID="k8s-pod-network.d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Workload="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0" May 16 09:40:30.320476 containerd[1514]: 2025-05-16 09:40:30.286 [INFO][4125] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Namespace="calico-system" Pod="calico-kube-controllers-757d5fc7b7-4p8nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0", GenerateName:"calico-kube-controllers-757d5fc7b7-", Namespace:"calico-system", SelfLink:"", UID:"c67ce86b-05fd-4101-9eb9-8a3abdef04eb", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"757d5fc7b7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-757d5fc7b7-4p8nh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2fc4d5e376a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:30.320476 containerd[1514]: 2025-05-16 09:40:30.287 [INFO][4125] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Namespace="calico-system" Pod="calico-kube-controllers-757d5fc7b7-4p8nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0" May 16 09:40:30.320476 containerd[1514]: 2025-05-16 09:40:30.287 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fc4d5e376a ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Namespace="calico-system" Pod="calico-kube-controllers-757d5fc7b7-4p8nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0" May 16 09:40:30.320476 containerd[1514]: 2025-05-16 09:40:30.297 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Namespace="calico-system" Pod="calico-kube-controllers-757d5fc7b7-4p8nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0" May 16 09:40:30.320476 containerd[1514]: 2025-05-16 09:40:30.297 [INFO][4125] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Namespace="calico-system" Pod="calico-kube-controllers-757d5fc7b7-4p8nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0", GenerateName:"calico-kube-controllers-757d5fc7b7-", Namespace:"calico-system", SelfLink:"", UID:"c67ce86b-05fd-4101-9eb9-8a3abdef04eb", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"757d5fc7b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de", Pod:"calico-kube-controllers-757d5fc7b7-4p8nh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2fc4d5e376a", MAC:"be:b2:cf:66:15:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:30.320476 containerd[1514]: 2025-05-16 09:40:30.312 [INFO][4125] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" Namespace="calico-system" Pod="calico-kube-controllers-757d5fc7b7-4p8nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d5fc7b7--4p8nh-eth0" May 16 09:40:30.349538 containerd[1514]: time="2025-05-16T09:40:30.349051389Z" level=info msg="connecting to shim d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de" address="unix:///run/containerd/s/443fd76a81de29d7454c1e0fbda92b38e846019c92842c184a479588a913b582" namespace=k8s.io protocol=ttrpc version=3 May 16 09:40:30.382731 systemd[1]: Started cri-containerd-d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de.scope - libcontainer container d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de. May 16 09:40:30.393074 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:40:30.412945 containerd[1514]: time="2025-05-16T09:40:30.412857644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757d5fc7b7-4p8nh,Uid:c67ce86b-05fd-4101-9eb9-8a3abdef04eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de\"" May 16 09:40:30.415750 containerd[1514]: time="2025-05-16T09:40:30.415719914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 16 09:40:31.174604 containerd[1514]: time="2025-05-16T09:40:31.174444242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7478f7b79b-7sqsw,Uid:9346848e-d416-4e64-93cc-dfc68fe9bf99,Namespace:calico-apiserver,Attempt:0,}" May 16 09:40:31.175554 containerd[1514]: time="2025-05-16T09:40:31.175235522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x7nd6,Uid:047fb491-99ce-45e8-a0a7-c50febff27c0,Namespace:kube-system,Attempt:0,}" May 16 09:40:31.293899 systemd-networkd[1429]: 
cali069e120aba3: Link UP May 16 09:40:31.294077 systemd-networkd[1429]: cali069e120aba3: Gained carrier May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.218 [INFO][4210] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0 calico-apiserver-7478f7b79b- calico-apiserver 9346848e-d416-4e64-93cc-dfc68fe9bf99 772 0 2025-05-16 09:39:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7478f7b79b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7478f7b79b-7sqsw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali069e120aba3 [] []}} ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-7sqsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.218 [INFO][4210] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-7sqsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.247 [INFO][4238] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" HandleID="k8s-pod-network.bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Workload="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.263 [INFO][4238] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" HandleID="k8s-pod-network.bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Workload="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7478f7b79b-7sqsw", "timestamp":"2025-05-16 09:40:31.247831998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.264 [INFO][4238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.264 [INFO][4238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.264 [INFO][4238] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.266 [INFO][4238] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" host="localhost" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.269 [INFO][4238] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.274 [INFO][4238] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.276 [INFO][4238] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.278 [INFO][4238] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.278 [INFO][4238] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" host="localhost" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.280 [INFO][4238] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.284 [INFO][4238] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" host="localhost" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.289 [INFO][4238] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" host="localhost" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.289 [INFO][4238] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" host="localhost" May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.289 [INFO][4238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 09:40:31.306032 containerd[1514]: 2025-05-16 09:40:31.289 [INFO][4238] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" HandleID="k8s-pod-network.bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Workload="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0" May 16 09:40:31.306709 containerd[1514]: 2025-05-16 09:40:31.291 [INFO][4210] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-7sqsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0", GenerateName:"calico-apiserver-7478f7b79b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9346848e-d416-4e64-93cc-dfc68fe9bf99", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7478f7b79b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7478f7b79b-7sqsw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali069e120aba3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:31.306709 containerd[1514]: 2025-05-16 09:40:31.291 [INFO][4210] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-7sqsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0" May 16 09:40:31.306709 containerd[1514]: 2025-05-16 09:40:31.291 [INFO][4210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali069e120aba3 ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-7sqsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0" May 16 09:40:31.306709 containerd[1514]: 2025-05-16 09:40:31.294 [INFO][4210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-7sqsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0" May 16 09:40:31.306709 containerd[1514]: 2025-05-16 09:40:31.294 [INFO][4210] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-7sqsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0", GenerateName:"calico-apiserver-7478f7b79b-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"9346848e-d416-4e64-93cc-dfc68fe9bf99", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7478f7b79b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee", Pod:"calico-apiserver-7478f7b79b-7sqsw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali069e120aba3", MAC:"1a:dc:9c:87:0a:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:31.306709 containerd[1514]: 2025-05-16 09:40:31.303 [INFO][4210] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-7sqsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--7sqsw-eth0" May 16 09:40:31.338484 containerd[1514]: time="2025-05-16T09:40:31.338446065Z" level=info msg="connecting to shim bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee" address="unix:///run/containerd/s/62caa1989839f06dc31040aff617ca7d340495bc0a2d53083419312e2a4ff9b6" namespace=k8s.io protocol=ttrpc version=3 May 16 09:40:31.366875 systemd[1]: Started 
cri-containerd-bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee.scope - libcontainer container bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee. May 16 09:40:31.385035 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:40:31.401819 systemd-networkd[1429]: cali999699f709f: Link UP May 16 09:40:31.402140 systemd-networkd[1429]: cali999699f709f: Gained carrier May 16 09:40:31.413770 containerd[1514]: time="2025-05-16T09:40:31.413724876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7478f7b79b-7sqsw,Uid:9346848e-d416-4e64-93cc-dfc68fe9bf99,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee\"" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.218 [INFO][4207] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0 coredns-6f6b679f8f- kube-system 047fb491-99ce-45e8-a0a7-c50febff27c0 770 0 2025-05-16 09:39:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-x7nd6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali999699f709f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Namespace="kube-system" Pod="coredns-6f6b679f8f-x7nd6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x7nd6-" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.218 [INFO][4207] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Namespace="kube-system" Pod="coredns-6f6b679f8f-x7nd6" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.252 [INFO][4245] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" HandleID="k8s-pod-network.128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Workload="localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.264 [INFO][4245] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" HandleID="k8s-pod-network.128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Workload="localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027ac60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-x7nd6", "timestamp":"2025-05-16 09:40:31.252271382 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.264 [INFO][4245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.289 [INFO][4245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.289 [INFO][4245] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.368 [INFO][4245] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" host="localhost" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.374 [INFO][4245] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.378 [INFO][4245] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.380 [INFO][4245] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.383 [INFO][4245] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.383 [INFO][4245] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" host="localhost" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.385 [INFO][4245] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.389 [INFO][4245] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" host="localhost" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.394 [INFO][4245] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" host="localhost" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.394 [INFO][4245] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" host="localhost" May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.394 [INFO][4245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 09:40:31.417312 containerd[1514]: 2025-05-16 09:40:31.394 [INFO][4245] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" HandleID="k8s-pod-network.128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Workload="localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0" May 16 09:40:31.417929 containerd[1514]: 2025-05-16 09:40:31.400 [INFO][4207] cni-plugin/k8s.go 386: Populated endpoint ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Namespace="kube-system" Pod="coredns-6f6b679f8f-x7nd6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"047fb491-99ce-45e8-a0a7-c50febff27c0", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-x7nd6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali999699f709f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:31.417929 containerd[1514]: 2025-05-16 09:40:31.400 [INFO][4207] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Namespace="kube-system" Pod="coredns-6f6b679f8f-x7nd6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0" May 16 09:40:31.417929 containerd[1514]: 2025-05-16 09:40:31.400 [INFO][4207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali999699f709f ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Namespace="kube-system" Pod="coredns-6f6b679f8f-x7nd6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0" May 16 09:40:31.417929 containerd[1514]: 2025-05-16 09:40:31.402 [INFO][4207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Namespace="kube-system" Pod="coredns-6f6b679f8f-x7nd6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0" May 16 
09:40:31.417929 containerd[1514]: 2025-05-16 09:40:31.403 [INFO][4207] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Namespace="kube-system" Pod="coredns-6f6b679f8f-x7nd6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"047fb491-99ce-45e8-a0a7-c50febff27c0", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba", Pod:"coredns-6f6b679f8f-x7nd6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali999699f709f", MAC:"ba:e6:ef:fb:17:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:31.417929 containerd[1514]: 2025-05-16 09:40:31.413 [INFO][4207] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" Namespace="kube-system" Pod="coredns-6f6b679f8f-x7nd6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x7nd6-eth0" May 16 09:40:31.441185 containerd[1514]: time="2025-05-16T09:40:31.441099182Z" level=info msg="connecting to shim 128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba" address="unix:///run/containerd/s/2e2d27b99157a5e45282088b9299fd5a6346950b5f7b4a6fe12700d8b855b82a" namespace=k8s.io protocol=ttrpc version=3 May 16 09:40:31.466760 systemd[1]: Started cri-containerd-128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba.scope - libcontainer container 128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba. 
May 16 09:40:31.478929 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:40:31.499753 containerd[1514]: time="2025-05-16T09:40:31.499702989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x7nd6,Uid:047fb491-99ce-45e8-a0a7-c50febff27c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba\"" May 16 09:40:31.502252 containerd[1514]: time="2025-05-16T09:40:31.502224317Z" level=info msg="CreateContainer within sandbox \"128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 09:40:31.509416 containerd[1514]: time="2025-05-16T09:40:31.509377079Z" level=info msg="Container 5e31bc99bb7f93cdd3d29a4e067490da59731966762fa83daf3967c0b2045bdf: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:31.516371 containerd[1514]: time="2025-05-16T09:40:31.516318990Z" level=info msg="CreateContainer within sandbox \"128de3419d237329f2d640d8cfeeea47f07afe45730a757fb219d429fe29dcba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e31bc99bb7f93cdd3d29a4e067490da59731966762fa83daf3967c0b2045bdf\"" May 16 09:40:31.516891 containerd[1514]: time="2025-05-16T09:40:31.516845897Z" level=info msg="StartContainer for \"5e31bc99bb7f93cdd3d29a4e067490da59731966762fa83daf3967c0b2045bdf\"" May 16 09:40:31.517748 containerd[1514]: time="2025-05-16T09:40:31.517607335Z" level=info msg="connecting to shim 5e31bc99bb7f93cdd3d29a4e067490da59731966762fa83daf3967c0b2045bdf" address="unix:///run/containerd/s/2e2d27b99157a5e45282088b9299fd5a6346950b5f7b4a6fe12700d8b855b82a" protocol=ttrpc version=3 May 16 09:40:31.549060 systemd-networkd[1429]: cali2fc4d5e376a: Gained IPv6LL May 16 09:40:31.550744 systemd[1]: Started cri-containerd-5e31bc99bb7f93cdd3d29a4e067490da59731966762fa83daf3967c0b2045bdf.scope - libcontainer container 
5e31bc99bb7f93cdd3d29a4e067490da59731966762fa83daf3967c0b2045bdf. May 16 09:40:31.578632 containerd[1514]: time="2025-05-16T09:40:31.578591223Z" level=info msg="StartContainer for \"5e31bc99bb7f93cdd3d29a4e067490da59731966762fa83daf3967c0b2045bdf\" returns successfully" May 16 09:40:32.173645 containerd[1514]: time="2025-05-16T09:40:32.173371461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7478f7b79b-mpbwm,Uid:1ff10a03-7281-4b2c-957e-df9b401ce60e,Namespace:calico-apiserver,Attempt:0,}" May 16 09:40:32.173934 containerd[1514]: time="2025-05-16T09:40:32.173910608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zfjpx,Uid:3e454e8c-f06e-47e5-848c-b0b3db2ddb78,Namespace:calico-system,Attempt:0,}" May 16 09:40:32.279876 systemd-networkd[1429]: cali6e7a80a46b8: Link UP May 16 09:40:32.280636 systemd-networkd[1429]: cali6e7a80a46b8: Gained carrier May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.212 [INFO][4413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zfjpx-eth0 csi-node-driver- calico-system 3e454e8c-f06e-47e5-848c-b0b3db2ddb78 622 0 2025-05-16 09:39:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zfjpx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6e7a80a46b8 [] []}} ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Namespace="calico-system" Pod="csi-node-driver-zfjpx" WorkloadEndpoint="localhost-k8s-csi--node--driver--zfjpx-" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.212 [INFO][4413] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Namespace="calico-system" Pod="csi-node-driver-zfjpx" WorkloadEndpoint="localhost-k8s-csi--node--driver--zfjpx-eth0" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.239 [INFO][4441] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" HandleID="k8s-pod-network.9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Workload="localhost-k8s-csi--node--driver--zfjpx-eth0" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.251 [INFO][4441] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" HandleID="k8s-pod-network.9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Workload="localhost-k8s-csi--node--driver--zfjpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012bcf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zfjpx", "timestamp":"2025-05-16 09:40:32.239365738 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.251 [INFO][4441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.251 [INFO][4441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.251 [INFO][4441] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.253 [INFO][4441] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" host="localhost" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.256 [INFO][4441] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.261 [INFO][4441] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.262 [INFO][4441] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.264 [INFO][4441] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.264 [INFO][4441] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" host="localhost" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.266 [INFO][4441] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.269 [INFO][4441] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" host="localhost" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.274 [INFO][4441] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" host="localhost" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.274 [INFO][4441] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" host="localhost" May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.274 [INFO][4441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 09:40:32.292946 containerd[1514]: 2025-05-16 09:40:32.275 [INFO][4441] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" HandleID="k8s-pod-network.9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Workload="localhost-k8s-csi--node--driver--zfjpx-eth0" May 16 09:40:32.293648 containerd[1514]: 2025-05-16 09:40:32.277 [INFO][4413] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Namespace="calico-system" Pod="csi-node-driver-zfjpx" WorkloadEndpoint="localhost-k8s-csi--node--driver--zfjpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zfjpx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e454e8c-f06e-47e5-848c-b0b3db2ddb78", ResourceVersion:"622", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zfjpx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e7a80a46b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:32.293648 containerd[1514]: 2025-05-16 09:40:32.277 [INFO][4413] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Namespace="calico-system" Pod="csi-node-driver-zfjpx" WorkloadEndpoint="localhost-k8s-csi--node--driver--zfjpx-eth0" May 16 09:40:32.293648 containerd[1514]: 2025-05-16 09:40:32.277 [INFO][4413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e7a80a46b8 ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Namespace="calico-system" Pod="csi-node-driver-zfjpx" WorkloadEndpoint="localhost-k8s-csi--node--driver--zfjpx-eth0" May 16 09:40:32.293648 containerd[1514]: 2025-05-16 09:40:32.279 [INFO][4413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Namespace="calico-system" Pod="csi-node-driver-zfjpx" WorkloadEndpoint="localhost-k8s-csi--node--driver--zfjpx-eth0" May 16 09:40:32.293648 containerd[1514]: 2025-05-16 09:40:32.280 [INFO][4413] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Namespace="calico-system" 
Pod="csi-node-driver-zfjpx" WorkloadEndpoint="localhost-k8s-csi--node--driver--zfjpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zfjpx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e454e8c-f06e-47e5-848c-b0b3db2ddb78", ResourceVersion:"622", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb", Pod:"csi-node-driver-zfjpx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e7a80a46b8", MAC:"42:b1:95:a8:b4:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:32.293648 containerd[1514]: 2025-05-16 09:40:32.290 [INFO][4413] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" Namespace="calico-system" Pod="csi-node-driver-zfjpx" WorkloadEndpoint="localhost-k8s-csi--node--driver--zfjpx-eth0" May 16 09:40:32.315350 containerd[1514]: 
time="2025-05-16T09:40:32.315307983Z" level=info msg="connecting to shim 9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb" address="unix:///run/containerd/s/9695cfdc733b31f70f18aaf8b13e2db6608dfcbaf6f631b4f9d4f5d3336378ae" namespace=k8s.io protocol=ttrpc version=3 May 16 09:40:32.334743 systemd[1]: Started cri-containerd-9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb.scope - libcontainer container 9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb. May 16 09:40:32.355484 kubelet[2634]: I0516 09:40:32.352351 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-x7nd6" podStartSLOduration=42.352334959 podStartE2EDuration="42.352334959s" podCreationTimestamp="2025-05-16 09:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:40:32.338375034 +0000 UTC m=+49.256323715" watchObservedRunningTime="2025-05-16 09:40:32.352334959 +0000 UTC m=+49.270283640" May 16 09:40:32.366309 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:40:32.386793 containerd[1514]: time="2025-05-16T09:40:32.386758007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zfjpx,Uid:3e454e8c-f06e-47e5-848c-b0b3db2ddb78,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb\"" May 16 09:40:32.398684 systemd-networkd[1429]: cali79c0a6105b5: Link UP May 16 09:40:32.399575 systemd-networkd[1429]: cali79c0a6105b5: Gained carrier May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.211 [INFO][4407] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0 calico-apiserver-7478f7b79b- calico-apiserver 
1ff10a03-7281-4b2c-957e-df9b401ce60e 767 0 2025-05-16 09:39:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7478f7b79b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7478f7b79b-mpbwm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali79c0a6105b5 [] []}} ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-mpbwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.212 [INFO][4407] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-mpbwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.240 [INFO][4435] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" HandleID="k8s-pod-network.d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Workload="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.251 [INFO][4435] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" HandleID="k8s-pod-network.d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Workload="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-7478f7b79b-mpbwm", "timestamp":"2025-05-16 09:40:32.240346226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.251 [INFO][4435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.275 [INFO][4435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.275 [INFO][4435] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.358 [INFO][4435] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" host="localhost" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.366 [INFO][4435] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.373 [INFO][4435] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.375 [INFO][4435] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.378 [INFO][4435] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.378 [INFO][4435] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" host="localhost" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.379 [INFO][4435] ipam/ipam.go 
1685: Creating new handle: k8s-pod-network.d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.387 [INFO][4435] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" host="localhost" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.393 [INFO][4435] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" host="localhost" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.393 [INFO][4435] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" host="localhost" May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.393 [INFO][4435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 09:40:32.413143 containerd[1514]: 2025-05-16 09:40:32.394 [INFO][4435] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" HandleID="k8s-pod-network.d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Workload="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0" May 16 09:40:32.413689 containerd[1514]: 2025-05-16 09:40:32.396 [INFO][4407] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-mpbwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0", GenerateName:"calico-apiserver-7478f7b79b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ff10a03-7281-4b2c-957e-df9b401ce60e", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7478f7b79b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7478f7b79b-mpbwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79c0a6105b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:32.413689 containerd[1514]: 2025-05-16 09:40:32.396 [INFO][4407] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-mpbwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0" May 16 09:40:32.413689 containerd[1514]: 2025-05-16 09:40:32.396 [INFO][4407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79c0a6105b5 ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-mpbwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0" May 16 09:40:32.413689 containerd[1514]: 2025-05-16 09:40:32.400 [INFO][4407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-mpbwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0" May 16 09:40:32.413689 containerd[1514]: 2025-05-16 09:40:32.401 [INFO][4407] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-mpbwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0", GenerateName:"calico-apiserver-7478f7b79b-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"1ff10a03-7281-4b2c-957e-df9b401ce60e", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7478f7b79b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd", Pod:"calico-apiserver-7478f7b79b-mpbwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79c0a6105b5", MAC:"a2:41:33:59:74:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:40:32.413689 containerd[1514]: 2025-05-16 09:40:32.409 [INFO][4407] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" Namespace="calico-apiserver" Pod="calico-apiserver-7478f7b79b-mpbwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7478f7b79b--mpbwm-eth0" May 16 09:40:32.433274 containerd[1514]: time="2025-05-16T09:40:32.433225006Z" level=info msg="connecting to shim d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd" address="unix:///run/containerd/s/0a1896e959d9c74cb940d82d1362c586a0db3dda70c310cafe55b927df242dff" namespace=k8s.io protocol=ttrpc version=3 May 16 09:40:32.458730 systemd[1]: Started 
cri-containerd-d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd.scope - libcontainer container d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd. May 16 09:40:32.468815 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:40:32.487142 containerd[1514]: time="2025-05-16T09:40:32.487107049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7478f7b79b-mpbwm,Uid:1ff10a03-7281-4b2c-957e-df9b401ce60e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd\"" May 16 09:40:32.508836 systemd-networkd[1429]: cali999699f709f: Gained IPv6LL May 16 09:40:33.084880 systemd-networkd[1429]: cali069e120aba3: Gained IPv6LL May 16 09:40:33.468732 systemd-networkd[1429]: cali79c0a6105b5: Gained IPv6LL May 16 09:40:33.596729 systemd-networkd[1429]: cali6e7a80a46b8: Gained IPv6LL May 16 09:40:35.054213 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:38226.service - OpenSSH per-connection server daemon (10.0.0.1:38226). May 16 09:40:35.112212 sshd[4589]: Accepted publickey for core from 10.0.0.1 port 38226 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:35.113704 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:35.120830 systemd-logind[1496]: New session 14 of user core. May 16 09:40:35.129809 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 09:40:35.244335 sshd[4591]: Connection closed by 10.0.0.1 port 38226 May 16 09:40:35.244688 sshd-session[4589]: pam_unix(sshd:session): session closed for user core May 16 09:40:35.248967 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:38226.service: Deactivated successfully. May 16 09:40:35.250842 systemd[1]: session-14.scope: Deactivated successfully. May 16 09:40:35.253127 systemd-logind[1496]: Session 14 logged out. 
Waiting for processes to exit. May 16 09:40:35.254294 systemd-logind[1496]: Removed session 14. May 16 09:40:39.245351 containerd[1514]: time="2025-05-16T09:40:39.245123835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa\" id:\"e10fb11d364366fea18c691540e7b4a861025642ead4b2af9a5d9d99a02bb990\" pid:4617 exited_at:{seconds:1747388439 nanos:244795942}" May 16 09:40:40.258193 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:38242.service - OpenSSH per-connection server daemon (10.0.0.1:38242). May 16 09:40:40.321164 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 38242 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:40.323101 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:40.329049 systemd-logind[1496]: New session 15 of user core. May 16 09:40:40.334874 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 09:40:40.492119 sshd[4636]: Connection closed by 10.0.0.1 port 38242 May 16 09:40:40.492507 sshd-session[4634]: pam_unix(sshd:session): session closed for user core May 16 09:40:40.496700 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:38242.service: Deactivated successfully. May 16 09:40:40.499324 systemd[1]: session-15.scope: Deactivated successfully. May 16 09:40:40.503481 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit. May 16 09:40:40.505322 systemd-logind[1496]: Removed session 15. 
May 16 09:40:40.529161 containerd[1514]: time="2025-05-16T09:40:40.529064530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:40.530054 containerd[1514]: time="2025-05-16T09:40:40.530014686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 16 09:40:40.531211 containerd[1514]: time="2025-05-16T09:40:40.531172010Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:40.532903 containerd[1514]: time="2025-05-16T09:40:40.532877795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:40.533771 containerd[1514]: time="2025-05-16T09:40:40.533739428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 10.117987073s" May 16 09:40:40.533810 containerd[1514]: time="2025-05-16T09:40:40.533773749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 16 09:40:40.535728 containerd[1514]: time="2025-05-16T09:40:40.535706463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 16 09:40:40.545594 containerd[1514]: time="2025-05-16T09:40:40.544879572Z" level=info msg="CreateContainer within sandbox 
\"d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 16 09:40:40.552628 containerd[1514]: time="2025-05-16T09:40:40.552595745Z" level=info msg="Container ceb349c799b666e5f59518377f6943195725723f2014c7bbb4441615219afcd2: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:40.555160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673254104.mount: Deactivated successfully. May 16 09:40:40.560423 containerd[1514]: time="2025-05-16T09:40:40.560392522Z" level=info msg="CreateContainer within sandbox \"d07b2643c06b189febdb9b264df1d9e41776bcc94e9fdecb1e0d83652fa588de\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ceb349c799b666e5f59518377f6943195725723f2014c7bbb4441615219afcd2\"" May 16 09:40:40.560824 containerd[1514]: time="2025-05-16T09:40:40.560800618Z" level=info msg="StartContainer for \"ceb349c799b666e5f59518377f6943195725723f2014c7bbb4441615219afcd2\"" May 16 09:40:40.563563 containerd[1514]: time="2025-05-16T09:40:40.563522921Z" level=info msg="connecting to shim ceb349c799b666e5f59518377f6943195725723f2014c7bbb4441615219afcd2" address="unix:///run/containerd/s/443fd76a81de29d7454c1e0fbda92b38e846019c92842c184a479588a913b582" protocol=ttrpc version=3 May 16 09:40:40.587746 systemd[1]: Started cri-containerd-ceb349c799b666e5f59518377f6943195725723f2014c7bbb4441615219afcd2.scope - libcontainer container ceb349c799b666e5f59518377f6943195725723f2014c7bbb4441615219afcd2. 
May 16 09:40:40.637097 containerd[1514]: time="2025-05-16T09:40:40.637062359Z" level=info msg="StartContainer for \"ceb349c799b666e5f59518377f6943195725723f2014c7bbb4441615219afcd2\" returns successfully" May 16 09:40:41.365157 kubelet[2634]: I0516 09:40:41.365075 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-757d5fc7b7-4p8nh" podStartSLOduration=34.244592152 podStartE2EDuration="44.365048222s" podCreationTimestamp="2025-05-16 09:39:57 +0000 UTC" firstStartedPulling="2025-05-16 09:40:30.414139911 +0000 UTC m=+47.332088552" lastFinishedPulling="2025-05-16 09:40:40.534595941 +0000 UTC m=+57.452544622" observedRunningTime="2025-05-16 09:40:41.362714856 +0000 UTC m=+58.280663537" watchObservedRunningTime="2025-05-16 09:40:41.365048222 +0000 UTC m=+58.282996903" May 16 09:40:41.384797 containerd[1514]: time="2025-05-16T09:40:41.384737987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ceb349c799b666e5f59518377f6943195725723f2014c7bbb4441615219afcd2\" id:\"2000c727e4eeab8fb82161b432796671d5fa951170093ce289aff4d0a3542803\" pid:4695 exited_at:{seconds:1747388441 nanos:384345413}" May 16 09:40:45.507987 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:57472.service - OpenSSH per-connection server daemon (10.0.0.1:57472). May 16 09:40:45.561545 sshd[4719]: Accepted publickey for core from 10.0.0.1 port 57472 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:45.562826 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:45.566866 systemd-logind[1496]: New session 16 of user core. May 16 09:40:45.577735 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 16 09:40:45.708059 sshd[4721]: Connection closed by 10.0.0.1 port 57472 May 16 09:40:45.708368 sshd-session[4719]: pam_unix(sshd:session): session closed for user core May 16 09:40:45.711729 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:57472.service: Deactivated successfully. May 16 09:40:45.714576 systemd[1]: session-16.scope: Deactivated successfully. May 16 09:40:45.715314 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit. May 16 09:40:45.716334 systemd-logind[1496]: Removed session 16. May 16 09:40:46.694227 containerd[1514]: time="2025-05-16T09:40:46.694181320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ceb349c799b666e5f59518377f6943195725723f2014c7bbb4441615219afcd2\" id:\"2519054633efafd41f96249af198f7e00dc6070056704639d6491246883181eb\" pid:4747 exited_at:{seconds:1747388446 nanos:693958233}" May 16 09:40:47.280085 containerd[1514]: time="2025-05-16T09:40:47.280033828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:47.281567 containerd[1514]: time="2025-05-16T09:40:47.281533474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 16 09:40:47.282207 containerd[1514]: time="2025-05-16T09:40:47.282176933Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:47.285017 containerd[1514]: time="2025-05-16T09:40:47.284369360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:47.285017 containerd[1514]: time="2025-05-16T09:40:47.284898856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id 
\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 6.749162032s" May 16 09:40:47.285017 containerd[1514]: time="2025-05-16T09:40:47.284935057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 16 09:40:47.287668 containerd[1514]: time="2025-05-16T09:40:47.287645940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 16 09:40:47.290619 containerd[1514]: time="2025-05-16T09:40:47.290324942Z" level=info msg="CreateContainer within sandbox \"bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 09:40:47.298615 containerd[1514]: time="2025-05-16T09:40:47.297848291Z" level=info msg="Container f1d6a5793244b022597d222c033944b447335097d6047b193eebc8a732bc77ad: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:47.304597 containerd[1514]: time="2025-05-16T09:40:47.304547455Z" level=info msg="CreateContainer within sandbox \"bc6f7ef3a694fac82a9d19f41a34ba9636aacbd7e5977a98b4bc9b1fa07224ee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f1d6a5793244b022597d222c033944b447335097d6047b193eebc8a732bc77ad\"" May 16 09:40:47.305036 containerd[1514]: time="2025-05-16T09:40:47.304967508Z" level=info msg="StartContainer for \"f1d6a5793244b022597d222c033944b447335097d6047b193eebc8a732bc77ad\"" May 16 09:40:47.306617 containerd[1514]: time="2025-05-16T09:40:47.306410112Z" level=info msg="connecting to shim f1d6a5793244b022597d222c033944b447335097d6047b193eebc8a732bc77ad" address="unix:///run/containerd/s/62caa1989839f06dc31040aff617ca7d340495bc0a2d53083419312e2a4ff9b6" protocol=ttrpc version=3 May 
16 09:40:47.328764 systemd[1]: Started cri-containerd-f1d6a5793244b022597d222c033944b447335097d6047b193eebc8a732bc77ad.scope - libcontainer container f1d6a5793244b022597d222c033944b447335097d6047b193eebc8a732bc77ad. May 16 09:40:47.432999 containerd[1514]: time="2025-05-16T09:40:47.432630397Z" level=info msg="StartContainer for \"f1d6a5793244b022597d222c033944b447335097d6047b193eebc8a732bc77ad\" returns successfully" May 16 09:40:48.467400 kubelet[2634]: I0516 09:40:48.467338 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7478f7b79b-7sqsw" podStartSLOduration=35.595420706 podStartE2EDuration="51.467321272s" podCreationTimestamp="2025-05-16 09:39:57 +0000 UTC" firstStartedPulling="2025-05-16 09:40:31.415514927 +0000 UTC m=+48.333463608" lastFinishedPulling="2025-05-16 09:40:47.287415493 +0000 UTC m=+64.205364174" observedRunningTime="2025-05-16 09:40:48.467055144 +0000 UTC m=+65.385003825" watchObservedRunningTime="2025-05-16 09:40:48.467321272 +0000 UTC m=+65.385269953" May 16 09:40:50.725874 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:57474.service - OpenSSH per-connection server daemon (10.0.0.1:57474). May 16 09:40:50.788875 sshd[4805]: Accepted publickey for core from 10.0.0.1 port 57474 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:50.790202 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:50.794998 systemd-logind[1496]: New session 17 of user core. May 16 09:40:50.800758 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 09:40:50.947365 sshd[4807]: Connection closed by 10.0.0.1 port 57474 May 16 09:40:50.947700 sshd-session[4805]: pam_unix(sshd:session): session closed for user core May 16 09:40:50.950954 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:57474.service: Deactivated successfully. May 16 09:40:50.953129 systemd[1]: session-17.scope: Deactivated successfully. 
May 16 09:40:50.955132 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit. May 16 09:40:50.956199 systemd-logind[1496]: Removed session 17. May 16 09:40:52.385179 containerd[1514]: time="2025-05-16T09:40:52.385129042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:52.386037 containerd[1514]: time="2025-05-16T09:40:52.385787700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 16 09:40:52.386824 containerd[1514]: time="2025-05-16T09:40:52.386783885Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:52.388649 containerd[1514]: time="2025-05-16T09:40:52.388621333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:52.389602 containerd[1514]: time="2025-05-16T09:40:52.389531277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 5.101856216s" May 16 09:40:52.389602 containerd[1514]: time="2025-05-16T09:40:52.389559118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 16 09:40:52.392809 containerd[1514]: time="2025-05-16T09:40:52.392782081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 16 09:40:52.406688 containerd[1514]: 
time="2025-05-16T09:40:52.406650682Z" level=info msg="CreateContainer within sandbox \"9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 16 09:40:52.417604 containerd[1514]: time="2025-05-16T09:40:52.417181436Z" level=info msg="Container ccf0c90c85ff581bbef5096e620ee40b762da0fae9a472915d1d94c0c7d05ebd: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:52.424326 containerd[1514]: time="2025-05-16T09:40:52.424288940Z" level=info msg="CreateContainer within sandbox \"9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ccf0c90c85ff581bbef5096e620ee40b762da0fae9a472915d1d94c0c7d05ebd\"" May 16 09:40:52.425805 containerd[1514]: time="2025-05-16T09:40:52.425757938Z" level=info msg="StartContainer for \"ccf0c90c85ff581bbef5096e620ee40b762da0fae9a472915d1d94c0c7d05ebd\"" May 16 09:40:52.427337 containerd[1514]: time="2025-05-16T09:40:52.427301619Z" level=info msg="connecting to shim ccf0c90c85ff581bbef5096e620ee40b762da0fae9a472915d1d94c0c7d05ebd" address="unix:///run/containerd/s/9695cfdc733b31f70f18aaf8b13e2db6608dfcbaf6f631b4f9d4f5d3336378ae" protocol=ttrpc version=3 May 16 09:40:52.449780 systemd[1]: Started cri-containerd-ccf0c90c85ff581bbef5096e620ee40b762da0fae9a472915d1d94c0c7d05ebd.scope - libcontainer container ccf0c90c85ff581bbef5096e620ee40b762da0fae9a472915d1d94c0c7d05ebd. 
May 16 09:40:52.482732 containerd[1514]: time="2025-05-16T09:40:52.482670898Z" level=info msg="StartContainer for \"ccf0c90c85ff581bbef5096e620ee40b762da0fae9a472915d1d94c0c7d05ebd\" returns successfully" May 16 09:40:53.359423 containerd[1514]: time="2025-05-16T09:40:53.359364113Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:53.360126 containerd[1514]: time="2025-05-16T09:40:53.360092411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 16 09:40:53.361719 containerd[1514]: time="2025-05-16T09:40:53.361684612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 968.870769ms" May 16 09:40:53.361719 containerd[1514]: time="2025-05-16T09:40:53.361719492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 16 09:40:53.363255 containerd[1514]: time="2025-05-16T09:40:53.363085727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 16 09:40:53.364063 containerd[1514]: time="2025-05-16T09:40:53.364037391Z" level=info msg="CreateContainer within sandbox \"d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 09:40:53.371687 containerd[1514]: time="2025-05-16T09:40:53.371656663Z" level=info msg="Container 132be3a580befe5516bdcb16716946165196aaa0667431418231c10a9d89a27e: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:53.382810 
containerd[1514]: time="2025-05-16T09:40:53.382774743Z" level=info msg="CreateContainer within sandbox \"d1a05490ea405670bf80a3c4eb8e7b9ed256d30ef863686a182941f85c351edd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"132be3a580befe5516bdcb16716946165196aaa0667431418231c10a9d89a27e\"" May 16 09:40:53.384804 containerd[1514]: time="2025-05-16T09:40:53.383291836Z" level=info msg="StartContainer for \"132be3a580befe5516bdcb16716946165196aaa0667431418231c10a9d89a27e\"" May 16 09:40:53.384804 containerd[1514]: time="2025-05-16T09:40:53.384309621Z" level=info msg="connecting to shim 132be3a580befe5516bdcb16716946165196aaa0667431418231c10a9d89a27e" address="unix:///run/containerd/s/0a1896e959d9c74cb940d82d1362c586a0db3dda70c310cafe55b927df242dff" protocol=ttrpc version=3 May 16 09:40:53.412836 systemd[1]: Started cri-containerd-132be3a580befe5516bdcb16716946165196aaa0667431418231c10a9d89a27e.scope - libcontainer container 132be3a580befe5516bdcb16716946165196aaa0667431418231c10a9d89a27e. 
May 16 09:40:53.468759 containerd[1514]: time="2025-05-16T09:40:53.468716027Z" level=info msg="StartContainer for \"132be3a580befe5516bdcb16716946165196aaa0667431418231c10a9d89a27e\" returns successfully" May 16 09:40:54.532521 kubelet[2634]: I0516 09:40:54.532460 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7478f7b79b-mpbwm" podStartSLOduration=36.658103786 podStartE2EDuration="57.532443032s" podCreationTimestamp="2025-05-16 09:39:57 +0000 UTC" firstStartedPulling="2025-05-16 09:40:32.488393752 +0000 UTC m=+49.406342393" lastFinishedPulling="2025-05-16 09:40:53.362732958 +0000 UTC m=+70.280681639" observedRunningTime="2025-05-16 09:40:53.495095611 +0000 UTC m=+70.413044292" watchObservedRunningTime="2025-05-16 09:40:54.532443032 +0000 UTC m=+71.450391673" May 16 09:40:55.959169 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:49858.service - OpenSSH per-connection server daemon (10.0.0.1:49858). May 16 09:40:56.021489 sshd[4900]: Accepted publickey for core from 10.0.0.1 port 49858 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:56.023035 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:56.028635 systemd-logind[1496]: New session 18 of user core. May 16 09:40:56.037815 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 09:40:56.208157 sshd[4902]: Connection closed by 10.0.0.1 port 49858 May 16 09:40:56.208654 sshd-session[4900]: pam_unix(sshd:session): session closed for user core May 16 09:40:56.219803 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:49858.service: Deactivated successfully. May 16 09:40:56.221324 systemd[1]: session-18.scope: Deactivated successfully. May 16 09:40:56.222109 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit. 
May 16 09:40:56.225307 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:49862.service - OpenSSH per-connection server daemon (10.0.0.1:49862). May 16 09:40:56.227296 systemd-logind[1496]: Removed session 18. May 16 09:40:56.279351 sshd[4917]: Accepted publickey for core from 10.0.0.1 port 49862 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:56.280724 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:56.286064 systemd-logind[1496]: New session 19 of user core. May 16 09:40:56.294777 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 09:40:56.581639 sshd[4919]: Connection closed by 10.0.0.1 port 49862 May 16 09:40:56.582975 sshd-session[4917]: pam_unix(sshd:session): session closed for user core May 16 09:40:56.595374 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:49862.service: Deactivated successfully. May 16 09:40:56.597200 systemd[1]: session-19.scope: Deactivated successfully. May 16 09:40:56.598437 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit. May 16 09:40:56.601500 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:49872.service - OpenSSH per-connection server daemon (10.0.0.1:49872). May 16 09:40:56.602454 systemd-logind[1496]: Removed session 19. May 16 09:40:56.659376 sshd[4930]: Accepted publickey for core from 10.0.0.1 port 49872 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:56.660967 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:56.667521 systemd-logind[1496]: New session 20 of user core. May 16 09:40:56.679284 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 16 09:40:57.852595 containerd[1514]: time="2025-05-16T09:40:57.852539989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:57.853862 containerd[1514]: time="2025-05-16T09:40:57.853823337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 16 09:40:57.854823 containerd[1514]: time="2025-05-16T09:40:57.854790959Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:57.861520 containerd[1514]: time="2025-05-16T09:40:57.861413666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:40:57.863041 containerd[1514]: time="2025-05-16T09:40:57.863013141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 4.499895454s" May 16 09:40:57.863041 containerd[1514]: time="2025-05-16T09:40:57.863043222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 16 09:40:57.866960 containerd[1514]: time="2025-05-16T09:40:57.866577220Z" level=info msg="CreateContainer within sandbox \"9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 16 09:40:57.876094 containerd[1514]: time="2025-05-16T09:40:57.876056270Z" level=info msg="Container 807a7e9563d079078475bbd9b64fc02e0c3b92b465017ece6350645bda7b286e: CDI devices from CRI Config.CDIDevices: []" May 16 09:40:57.881262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577501445.mount: Deactivated successfully. May 16 09:40:57.887343 containerd[1514]: time="2025-05-16T09:40:57.887269759Z" level=info msg="CreateContainer within sandbox \"9d71af8b92c9bbb8a568f263ccc39df68f070fd10eb80291db57ef5b6b2db5bb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"807a7e9563d079078475bbd9b64fc02e0c3b92b465017ece6350645bda7b286e\"" May 16 09:40:57.887916 containerd[1514]: time="2025-05-16T09:40:57.887891133Z" level=info msg="StartContainer for \"807a7e9563d079078475bbd9b64fc02e0c3b92b465017ece6350645bda7b286e\"" May 16 09:40:57.889272 containerd[1514]: time="2025-05-16T09:40:57.889246763Z" level=info msg="connecting to shim 807a7e9563d079078475bbd9b64fc02e0c3b92b465017ece6350645bda7b286e" address="unix:///run/containerd/s/9695cfdc733b31f70f18aaf8b13e2db6608dfcbaf6f631b4f9d4f5d3336378ae" protocol=ttrpc version=3 May 16 09:40:57.914750 systemd[1]: Started cri-containerd-807a7e9563d079078475bbd9b64fc02e0c3b92b465017ece6350645bda7b286e.scope - libcontainer container 807a7e9563d079078475bbd9b64fc02e0c3b92b465017ece6350645bda7b286e. 
May 16 09:40:57.962288 containerd[1514]: time="2025-05-16T09:40:57.962238421Z" level=info msg="StartContainer for \"807a7e9563d079078475bbd9b64fc02e0c3b92b465017ece6350645bda7b286e\" returns successfully" May 16 09:40:58.263957 kubelet[2634]: I0516 09:40:58.263850 2634 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 16 09:40:58.266185 kubelet[2634]: I0516 09:40:58.265950 2634 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 16 09:40:58.319255 sshd[4932]: Connection closed by 10.0.0.1 port 49872 May 16 09:40:58.319969 sshd-session[4930]: pam_unix(sshd:session): session closed for user core May 16 09:40:58.333683 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:49872.service: Deactivated successfully. May 16 09:40:58.338276 systemd[1]: session-20.scope: Deactivated successfully. May 16 09:40:58.341503 systemd[1]: session-20.scope: Consumed 518ms CPU time, 69M memory peak. May 16 09:40:58.344982 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit. May 16 09:40:58.349537 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:49882.service - OpenSSH per-connection server daemon (10.0.0.1:49882). May 16 09:40:58.351858 systemd-logind[1496]: Removed session 20. May 16 09:40:58.420533 sshd[4991]: Accepted publickey for core from 10.0.0.1 port 49882 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:58.422415 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:58.428933 systemd-logind[1496]: New session 21 of user core. May 16 09:40:58.432756 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 16 09:40:58.510980 kubelet[2634]: I0516 09:40:58.510681 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zfjpx" podStartSLOduration=36.033890253 podStartE2EDuration="1m1.51066179s" podCreationTimestamp="2025-05-16 09:39:57 +0000 UTC" firstStartedPulling="2025-05-16 09:40:32.388553255 +0000 UTC m=+49.306501936" lastFinishedPulling="2025-05-16 09:40:57.865324792 +0000 UTC m=+74.783273473" observedRunningTime="2025-05-16 09:40:58.509919854 +0000 UTC m=+75.427868535" watchObservedRunningTime="2025-05-16 09:40:58.51066179 +0000 UTC m=+75.428610471" May 16 09:40:58.722169 sshd[4993]: Connection closed by 10.0.0.1 port 49882 May 16 09:40:58.723646 sshd-session[4991]: pam_unix(sshd:session): session closed for user core May 16 09:40:58.732956 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:49882.service: Deactivated successfully. May 16 09:40:58.737261 systemd[1]: session-21.scope: Deactivated successfully. May 16 09:40:58.738115 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit. May 16 09:40:58.741733 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:49896.service - OpenSSH per-connection server daemon (10.0.0.1:49896). May 16 09:40:58.745032 systemd-logind[1496]: Removed session 21. May 16 09:40:58.803712 sshd[5005]: Accepted publickey for core from 10.0.0.1 port 49896 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:40:58.805029 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:40:58.809960 systemd-logind[1496]: New session 22 of user core. May 16 09:40:58.819786 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 09:40:58.952173 sshd[5007]: Connection closed by 10.0.0.1 port 49896 May 16 09:40:58.952977 sshd-session[5005]: pam_unix(sshd:session): session closed for user core May 16 09:40:58.956985 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:49896.service: Deactivated successfully. 
May 16 09:40:58.959427 systemd[1]: session-22.scope: Deactivated successfully.
May 16 09:40:58.960302 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit.
May 16 09:40:58.962698 systemd-logind[1496]: Removed session 22.
May 16 09:41:03.964442 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:43724.service - OpenSSH per-connection server daemon (10.0.0.1:43724).
May 16 09:41:04.011656 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 43724 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:41:04.012811 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:41:04.016855 systemd-logind[1496]: New session 23 of user core.
May 16 09:41:04.027839 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 09:41:04.157296 sshd[5026]: Connection closed by 10.0.0.1 port 43724
May 16 09:41:04.157653 sshd-session[5024]: pam_unix(sshd:session): session closed for user core
May 16 09:41:04.161093 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:43724.service: Deactivated successfully.
May 16 09:41:04.163135 systemd[1]: session-23.scope: Deactivated successfully.
May 16 09:41:04.164281 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit.
May 16 09:41:04.165675 systemd-logind[1496]: Removed session 23.
May 16 09:41:09.173344 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:43730.service - OpenSSH per-connection server daemon (10.0.0.1:43730).
May 16 09:41:09.221981 sshd[5049]: Accepted publickey for core from 10.0.0.1 port 43730 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:41:09.223224 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:41:09.229646 systemd-logind[1496]: New session 24 of user core.
May 16 09:41:09.237824 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 09:41:09.254670 containerd[1514]: time="2025-05-16T09:41:09.254607419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ef18e88459ce97e1ca86f1fada7678d89f6acdc07e27a45b9fd16eceef2e2aa\" id:\"7d998c326c5545c8125121009ed613b03330fcc477066fd6050381cd447b3a10\" pid:5063 exited_at:{seconds:1747388469 nanos:254288734}"
May 16 09:41:09.362145 sshd[5075]: Connection closed by 10.0.0.1 port 43730
May 16 09:41:09.362818 sshd-session[5049]: pam_unix(sshd:session): session closed for user core
May 16 09:41:09.366221 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:43730.service: Deactivated successfully.
May 16 09:41:09.368287 systemd[1]: session-24.scope: Deactivated successfully.
May 16 09:41:09.369082 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit.
May 16 09:41:09.370121 systemd-logind[1496]: Removed session 24.
May 16 09:41:14.376124 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:60484.service - OpenSSH per-connection server daemon (10.0.0.1:60484).
May 16 09:41:14.424095 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 60484 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:41:14.428413 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:41:14.433523 systemd-logind[1496]: New session 25 of user core.
May 16 09:41:14.438731 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 09:41:14.574599 sshd[5094]: Connection closed by 10.0.0.1 port 60484
May 16 09:41:14.575099 sshd-session[5092]: pam_unix(sshd:session): session closed for user core
May 16 09:41:14.578243 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:60484.service: Deactivated successfully.
May 16 09:41:14.580120 systemd[1]: session-25.scope: Deactivated successfully.
May 16 09:41:14.580875 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit.
May 16 09:41:14.582496 systemd-logind[1496]: Removed session 25.
May 16 09:41:16.694940 containerd[1514]: time="2025-05-16T09:41:16.694897693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ceb349c799b666e5f59518377f6943195725723f2014c7bbb4441615219afcd2\" id:\"2765aecef22b305a665650df816f527a11048b1a67bece0a524e2ad925947221\" pid:5118 exited_at:{seconds:1747388476 nanos:694656490}"
May 16 09:41:19.593924 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:60490.service - OpenSSH per-connection server daemon (10.0.0.1:60490).
May 16 09:41:19.647880 sshd[5129]: Accepted publickey for core from 10.0.0.1 port 60490 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:41:19.649151 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:41:19.654214 systemd-logind[1496]: New session 26 of user core.
May 16 09:41:19.665737 systemd[1]: Started session-26.scope - Session 26 of User core.
May 16 09:41:19.788816 sshd[5131]: Connection closed by 10.0.0.1 port 60490
May 16 09:41:19.788921 sshd-session[5129]: pam_unix(sshd:session): session closed for user core
May 16 09:41:19.793385 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:60490.service: Deactivated successfully.
May 16 09:41:19.796124 systemd[1]: session-26.scope: Deactivated successfully.
May 16 09:41:19.797759 systemd-logind[1496]: Session 26 logged out. Waiting for processes to exit.
May 16 09:41:19.799354 systemd-logind[1496]: Removed session 26.
May 16 09:41:24.802078 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:39470.service - OpenSSH per-connection server daemon (10.0.0.1:39470).
May 16 09:41:24.864049 sshd[5147]: Accepted publickey for core from 10.0.0.1 port 39470 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:41:24.865209 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:41:24.869035 systemd-logind[1496]: New session 27 of user core.
May 16 09:41:24.877749 systemd[1]: Started session-27.scope - Session 27 of User core.
May 16 09:41:25.006362 sshd[5149]: Connection closed by 10.0.0.1 port 39470
May 16 09:41:25.005528 sshd-session[5147]: pam_unix(sshd:session): session closed for user core
May 16 09:41:25.008995 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:39470.service: Deactivated successfully.
May 16 09:41:25.012052 systemd[1]: session-27.scope: Deactivated successfully.
May 16 09:41:25.013134 systemd-logind[1496]: Session 27 logged out. Waiting for processes to exit.
May 16 09:41:25.017665 systemd-logind[1496]: Removed session 27.