May 14 00:00:12.915885 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 00:00:12.915909 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025
May 14 00:00:12.915920 kernel: KASLR enabled
May 14 00:00:12.915926 kernel: efi: EFI v2.7 by EDK II
May 14 00:00:12.915932 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 14 00:00:12.915937 kernel: random: crng init done
May 14 00:00:12.915944 kernel: secureboot: Secure boot disabled
May 14 00:00:12.915950 kernel: ACPI: Early table checksum verification disabled
May 14 00:00:12.915957 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 14 00:00:12.915964 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 00:00:12.915971 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:12.915977 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:12.915983 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:12.915989 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:12.915997 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:12.916005 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:12.916011 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:12.916018 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:12.916026 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:12.916045 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 00:00:12.916052 kernel: NUMA: Failed to initialise from firmware
May 14 00:00:12.916059 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:00:12.916065 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
May 14 00:00:12.916072 kernel: Zone ranges:
May 14 00:00:12.916078 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:00:12.916085 kernel: DMA32 empty
May 14 00:00:12.916092 kernel: Normal empty
May 14 00:00:12.916098 kernel: Movable zone start for each node
May 14 00:00:12.916104 kernel: Early memory node ranges
May 14 00:00:12.916110 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 14 00:00:12.916117 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 14 00:00:12.916123 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 14 00:00:12.916129 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 14 00:00:12.916136 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 14 00:00:12.916146 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 14 00:00:12.916154 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 14 00:00:12.916160 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 14 00:00:12.916169 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 14 00:00:12.916175 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:00:12.916181 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 00:00:12.916191 kernel: psci: probing for conduit method from ACPI.
May 14 00:00:12.916197 kernel: psci: PSCIv1.1 detected in firmware.
May 14 00:00:12.916204 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 00:00:12.916212 kernel: psci: Trusted OS migration not required
May 14 00:00:12.916218 kernel: psci: SMC Calling Convention v1.1
May 14 00:00:12.916225 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 00:00:12.916232 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 14 00:00:12.916238 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 14 00:00:12.916245 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 00:00:12.916252 kernel: Detected PIPT I-cache on CPU0
May 14 00:00:12.916258 kernel: CPU features: detected: GIC system register CPU interface
May 14 00:00:12.916265 kernel: CPU features: detected: Hardware dirty bit management
May 14 00:00:12.916271 kernel: CPU features: detected: Spectre-v4
May 14 00:00:12.916279 kernel: CPU features: detected: Spectre-BHB
May 14 00:00:12.916286 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 00:00:12.916293 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 00:00:12.916299 kernel: CPU features: detected: ARM erratum 1418040
May 14 00:00:12.916306 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 00:00:12.916313 kernel: alternatives: applying boot alternatives
May 14 00:00:12.916320 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 14 00:00:12.916327 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 00:00:12.916334 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 00:00:12.916340 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 00:00:12.916347 kernel: Fallback order for Node 0: 0
May 14 00:00:12.916355 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 14 00:00:12.916385 kernel: Policy zone: DMA
May 14 00:00:12.916394 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 00:00:12.916401 kernel: software IO TLB: area num 4.
May 14 00:00:12.916410 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 14 00:00:12.916417 kernel: Memory: 2387344K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 184944K reserved, 0K cma-reserved)
May 14 00:00:12.916424 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 00:00:12.916431 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 00:00:12.916438 kernel: rcu: RCU event tracing is enabled.
May 14 00:00:12.916445 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 00:00:12.916451 kernel: Trampoline variant of Tasks RCU enabled.
May 14 00:00:12.916458 kernel: Tracing variant of Tasks RCU enabled.
May 14 00:00:12.916468 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 00:00:12.916475 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 00:00:12.916482 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 00:00:12.916488 kernel: GICv3: 256 SPIs implemented
May 14 00:00:12.916494 kernel: GICv3: 0 Extended SPIs implemented
May 14 00:00:12.916501 kernel: Root IRQ handler: gic_handle_irq
May 14 00:00:12.916507 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 00:00:12.916514 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 00:00:12.916520 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 00:00:12.916527 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 14 00:00:12.916534 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 14 00:00:12.916542 kernel: GICv3: using LPI property table @0x00000000400f0000
May 14 00:00:12.916549 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 14 00:00:12.916556 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 00:00:12.916562 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:00:12.916569 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 00:00:12.916576 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 00:00:12.916583 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 00:00:12.916589 kernel: arm-pv: using stolen time PV
May 14 00:00:12.916597 kernel: Console: colour dummy device 80x25
May 14 00:00:12.916604 kernel: ACPI: Core revision 20230628
May 14 00:00:12.916611 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 00:00:12.916620 kernel: pid_max: default: 32768 minimum: 301
May 14 00:00:12.916627 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 00:00:12.916634 kernel: landlock: Up and running.
May 14 00:00:12.916641 kernel: SELinux: Initializing.
May 14 00:00:12.916648 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:00:12.916655 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:00:12.916662 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 14 00:00:12.916669 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 00:00:12.916677 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 00:00:12.916685 kernel: rcu: Hierarchical SRCU implementation.
May 14 00:00:12.916693 kernel: rcu: Max phase no-delay instances is 400.
May 14 00:00:12.916700 kernel: Platform MSI: ITS@0x8080000 domain created
May 14 00:00:12.916707 kernel: PCI/MSI: ITS@0x8080000 domain created
May 14 00:00:12.916714 kernel: Remapping and enabling EFI services.
May 14 00:00:12.916721 kernel: smp: Bringing up secondary CPUs ...
May 14 00:00:12.916728 kernel: Detected PIPT I-cache on CPU1
May 14 00:00:12.916735 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 00:00:12.916743 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 14 00:00:12.916752 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:00:12.916759 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 00:00:12.916771 kernel: Detected PIPT I-cache on CPU2
May 14 00:00:12.916784 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 00:00:12.916792 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 14 00:00:12.916800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:00:12.916807 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 00:00:12.916815 kernel: Detected PIPT I-cache on CPU3
May 14 00:00:12.916822 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 00:00:12.916830 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 14 00:00:12.916844 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:00:12.916855 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 00:00:12.916866 kernel: smp: Brought up 1 node, 4 CPUs
May 14 00:00:12.916877 kernel: SMP: Total of 4 processors activated.
May 14 00:00:12.916888 kernel: CPU features: detected: 32-bit EL0 Support
May 14 00:00:12.916900 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 00:00:12.916910 kernel: CPU features: detected: Common not Private translations
May 14 00:00:12.916921 kernel: CPU features: detected: CRC32 instructions
May 14 00:00:12.916935 kernel: CPU features: detected: Enhanced Virtualization Traps
May 14 00:00:12.916944 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 00:00:12.916951 kernel: CPU features: detected: LSE atomic instructions
May 14 00:00:12.916960 kernel: CPU features: detected: Privileged Access Never
May 14 00:00:12.916970 kernel: CPU features: detected: RAS Extension Support
May 14 00:00:12.916981 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 00:00:12.916990 kernel: CPU: All CPU(s) started at EL1
May 14 00:00:12.917004 kernel: alternatives: applying system-wide alternatives
May 14 00:00:12.917017 kernel: devtmpfs: initialized
May 14 00:00:12.917029 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 00:00:12.917048 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 00:00:12.917061 kernel: pinctrl core: initialized pinctrl subsystem
May 14 00:00:12.917072 kernel: SMBIOS 3.0.0 present.
May 14 00:00:12.917083 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 00:00:12.917096 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 00:00:12.917107 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 00:00:12.917119 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 00:00:12.917128 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 00:00:12.917140 kernel: audit: initializing netlink subsys (disabled)
May 14 00:00:12.917158 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
May 14 00:00:12.917168 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 00:00:12.917182 kernel: cpuidle: using governor menu
May 14 00:00:12.917195 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 00:00:12.917207 kernel: ASID allocator initialised with 32768 entries
May 14 00:00:12.917218 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 00:00:12.917227 kernel: Serial: AMBA PL011 UART driver
May 14 00:00:12.917239 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 00:00:12.917251 kernel: Modules: 0 pages in range for non-PLT usage
May 14 00:00:12.917260 kernel: Modules: 509232 pages in range for PLT usage
May 14 00:00:12.917268 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 00:00:12.917276 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 00:00:12.917283 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 00:00:12.917291 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 00:00:12.917298 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 00:00:12.917306 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 00:00:12.917315 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 00:00:12.917323 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 00:00:12.917331 kernel: ACPI: Added _OSI(Module Device)
May 14 00:00:12.917338 kernel: ACPI: Added _OSI(Processor Device)
May 14 00:00:12.917345 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 00:00:12.917353 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 00:00:12.917367 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 00:00:12.917376 kernel: ACPI: Interpreter enabled
May 14 00:00:12.917388 kernel: ACPI: Using GIC for interrupt routing
May 14 00:00:12.917395 kernel: ACPI: MCFG table detected, 1 entries
May 14 00:00:12.917406 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 00:00:12.917413 kernel: printk: console [ttyAMA0] enabled
May 14 00:00:12.917420 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 00:00:12.917581 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 00:00:12.917664 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 00:00:12.917736 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 00:00:12.917804 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 00:00:12.917874 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 00:00:12.917885 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 00:00:12.917892 kernel: PCI host bridge to bus 0000:00
May 14 00:00:12.917970 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 00:00:12.918037 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 00:00:12.918101 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 00:00:12.918175 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 00:00:12.918268 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 14 00:00:12.918347 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 14 00:00:12.918447 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 14 00:00:12.918521 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 14 00:00:12.918592 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 00:00:12.918661 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 00:00:12.918728 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 14 00:00:12.918799 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 14 00:00:12.918861 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 00:00:12.918919 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 00:00:12.918977 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 00:00:12.918987 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 00:00:12.918994 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 00:00:12.919002 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 00:00:12.919011 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 00:00:12.919018 kernel: iommu: Default domain type: Translated
May 14 00:00:12.919025 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 00:00:12.919032 kernel: efivars: Registered efivars operations
May 14 00:00:12.919039 kernel: vgaarb: loaded
May 14 00:00:12.919046 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 00:00:12.919054 kernel: VFS: Disk quotas dquot_6.6.0
May 14 00:00:12.919061 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 00:00:12.919068 kernel: pnp: PnP ACPI init
May 14 00:00:12.919152 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 00:00:12.919163 kernel: pnp: PnP ACPI: found 1 devices
May 14 00:00:12.919170 kernel: NET: Registered PF_INET protocol family
May 14 00:00:12.919177 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 00:00:12.919185 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 00:00:12.919192 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 00:00:12.919199 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 00:00:12.919206 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 00:00:12.919215 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 00:00:12.919223 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:00:12.919230 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:00:12.919237 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 00:00:12.919244 kernel: PCI: CLS 0 bytes, default 64
May 14 00:00:12.919251 kernel: kvm [1]: HYP mode not available
May 14 00:00:12.919258 kernel: Initialise system trusted keyrings
May 14 00:00:12.919265 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 00:00:12.919272 kernel: Key type asymmetric registered
May 14 00:00:12.919280 kernel: Asymmetric key parser 'x509' registered
May 14 00:00:12.919287 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 00:00:12.919295 kernel: io scheduler mq-deadline registered
May 14 00:00:12.919302 kernel: io scheduler kyber registered
May 14 00:00:12.919309 kernel: io scheduler bfq registered
May 14 00:00:12.919316 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 00:00:12.919323 kernel: ACPI: button: Power Button [PWRB]
May 14 00:00:12.919330 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 00:00:12.919428 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 00:00:12.919442 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 00:00:12.919449 kernel: thunder_xcv, ver 1.0
May 14 00:00:12.919456 kernel: thunder_bgx, ver 1.0
May 14 00:00:12.919463 kernel: nicpf, ver 1.0
May 14 00:00:12.919470 kernel: nicvf, ver 1.0
May 14 00:00:12.919542 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 00:00:12.919606 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T00:00:12 UTC (1747180812)
May 14 00:00:12.919616 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 00:00:12.919625 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 14 00:00:12.919632 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 14 00:00:12.919639 kernel: watchdog: Hard watchdog permanently disabled
May 14 00:00:12.919647 kernel: NET: Registered PF_INET6 protocol family
May 14 00:00:12.919654 kernel: Segment Routing with IPv6
May 14 00:00:12.919661 kernel: In-situ OAM (IOAM) with IPv6
May 14 00:00:12.919668 kernel: NET: Registered PF_PACKET protocol family
May 14 00:00:12.919675 kernel: Key type dns_resolver registered
May 14 00:00:12.919682 kernel: registered taskstats version 1
May 14 00:00:12.919689 kernel: Loading compiled-in X.509 certificates
May 14 00:00:12.919698 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd'
May 14 00:00:12.919705 kernel: Key type .fscrypt registered
May 14 00:00:12.919713 kernel: Key type fscrypt-provisioning registered
May 14 00:00:12.919720 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 00:00:12.919727 kernel: ima: Allocated hash algorithm: sha1
May 14 00:00:12.919734 kernel: ima: No architecture policies found
May 14 00:00:12.919741 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 00:00:12.919748 kernel: clk: Disabling unused clocks
May 14 00:00:12.919757 kernel: Freeing unused kernel memory: 38464K
May 14 00:00:12.919764 kernel: Run /init as init process
May 14 00:00:12.919771 kernel: with arguments:
May 14 00:00:12.919781 kernel: /init
May 14 00:00:12.919791 kernel: with environment:
May 14 00:00:12.919798 kernel: HOME=/
May 14 00:00:12.919805 kernel: TERM=linux
May 14 00:00:12.919812 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 00:00:12.919820 systemd[1]: Successfully made /usr/ read-only.
May 14 00:00:12.919832 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 00:00:12.919841 systemd[1]: Detected virtualization kvm.
May 14 00:00:12.919849 systemd[1]: Detected architecture arm64.
May 14 00:00:12.919857 systemd[1]: Running in initrd.
May 14 00:00:12.919865 systemd[1]: No hostname configured, using default hostname.
May 14 00:00:12.919873 systemd[1]: Hostname set to .
May 14 00:00:12.919881 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:00:12.919891 systemd[1]: Queued start job for default target initrd.target.
May 14 00:00:12.919899 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:00:12.919907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:00:12.919916 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 00:00:12.919924 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 00:00:12.919933 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 00:00:12.919941 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 00:00:12.919952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 00:00:12.919961 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 00:00:12.919969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:00:12.919977 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 00:00:12.919985 systemd[1]: Reached target paths.target - Path Units.
May 14 00:00:12.919993 systemd[1]: Reached target slices.target - Slice Units.
May 14 00:00:12.920001 systemd[1]: Reached target swap.target - Swaps.
May 14 00:00:12.920009 systemd[1]: Reached target timers.target - Timer Units.
May 14 00:00:12.920019 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 00:00:12.920028 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 00:00:12.920036 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 00:00:12.920044 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 00:00:12.920053 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:00:12.920061 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 00:00:12.920070 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:00:12.920078 systemd[1]: Reached target sockets.target - Socket Units.
May 14 00:00:12.920086 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 00:00:12.920097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 00:00:12.920105 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 00:00:12.920114 systemd[1]: Starting systemd-fsck-usr.service...
May 14 00:00:12.920122 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 00:00:12.920130 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 00:00:12.920139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:12.920150 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 00:00:12.920158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:00:12.920168 systemd[1]: Finished systemd-fsck-usr.service.
May 14 00:00:12.920176 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 00:00:12.920203 systemd-journald[237]: Collecting audit messages is disabled.
May 14 00:00:12.920224 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 00:00:12.920232 kernel: Bridge firewalling registered
May 14 00:00:12.920240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:12.920249 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 00:00:12.920258 systemd-journald[237]: Journal started
May 14 00:00:12.920278 systemd-journald[237]: Runtime Journal (/run/log/journal/f801c37aa7dc4eda896afb9d2a3bd5f0) is 5.9M, max 47.3M, 41.4M free.
May 14 00:00:12.897113 systemd-modules-load[238]: Inserted module 'overlay'
May 14 00:00:12.922009 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 00:00:12.913931 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 14 00:00:12.923051 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 00:00:12.926883 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:00:12.928682 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 00:00:12.931520 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 00:00:12.935994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 00:00:12.943368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:00:12.945883 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 00:00:12.947894 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:00:12.950297 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 00:00:12.954563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:12.956242 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 00:00:12.976046 dracut-cmdline[279]: dracut-dracut-053
May 14 00:00:12.978502 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 14 00:00:13.001497 systemd-resolved[277]: Positive Trust Anchors:
May 14 00:00:13.001511 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:00:13.001544 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:00:13.008247 systemd-resolved[277]: Defaulting to hostname 'linux'.
May 14 00:00:13.009484 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 00:00:13.010404 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 00:00:13.062398 kernel: SCSI subsystem initialized
May 14 00:00:13.067402 kernel: Loading iSCSI transport class v2.0-870.
May 14 00:00:13.076414 kernel: iscsi: registered transport (tcp)
May 14 00:00:13.089479 kernel: iscsi: registered transport (qla4xxx)
May 14 00:00:13.089499 kernel: QLogic iSCSI HBA Driver
May 14 00:00:13.131584 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 00:00:13.133457 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 00:00:13.168460 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 00:00:13.168517 kernel: device-mapper: uevent: version 1.0.3
May 14 00:00:13.169515 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 00:00:13.216413 kernel: raid6: neonx8 gen() 15729 MB/s
May 14 00:00:13.233388 kernel: raid6: neonx4 gen() 15757 MB/s
May 14 00:00:13.250400 kernel: raid6: neonx2 gen() 13192 MB/s
May 14 00:00:13.267389 kernel: raid6: neonx1 gen() 10513 MB/s
May 14 00:00:13.284389 kernel: raid6: int64x8 gen() 6786 MB/s
May 14 00:00:13.301392 kernel: raid6: int64x4 gen() 7341 MB/s
May 14 00:00:13.318383 kernel: raid6: int64x2 gen() 6079 MB/s
May 14 00:00:13.335561 kernel: raid6: int64x1 gen() 5033 MB/s
May 14 00:00:13.335592 kernel: raid6: using algorithm neonx4 gen() 15757 MB/s
May 14 00:00:13.353527 kernel: raid6: .... xor() 12229 MB/s, rmw enabled
May 14 00:00:13.353555 kernel: raid6: using neon recovery algorithm
May 14 00:00:13.358765 kernel: xor: measuring software checksum speed
May 14 00:00:13.358786 kernel: 8regs : 21573 MB/sec
May 14 00:00:13.359452 kernel: 32regs : 21658 MB/sec
May 14 00:00:13.360681 kernel: arm64_neon : 27917 MB/sec
May 14 00:00:13.360693 kernel: xor: using function: arm64_neon (27917 MB/sec)
May 14 00:00:13.411410 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 00:00:13.421726 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 00:00:13.423930 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:00:13.448627 systemd-udevd[464]: Using default interface naming scheme 'v255'.
May 14 00:00:13.452269 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:00:13.454930 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 00:00:13.479285 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
May 14 00:00:13.504868 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 00:00:13.507026 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 00:00:13.565240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:00:13.570480 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 00:00:13.589880 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 00:00:13.591976 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 00:00:13.593356 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:00:13.596184 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 00:00:13.598553 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 00:00:13.618412 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 00:00:13.623787 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 14 00:00:13.623941 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 00:00:13.626848 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 00:00:13.626887 kernel: GPT:9289727 != 19775487
May 14 00:00:13.626899 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 00:00:13.626908 kernel: GPT:9289727 != 19775487
May 14 00:00:13.629461 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 00:00:13.629495 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:00:13.631936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:00:13.632052 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:13.635116 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:00:13.636146 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:00:13.636420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:13.639800 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:13.643162 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:13.656054 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (523)
May 14 00:00:13.656101 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (512)
May 14 00:00:13.666468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:13.674093 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 00:00:13.689306 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 00:00:13.698259 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 00:00:13.699380 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 00:00:13.708339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 00:00:13.710155 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 00:00:13.711909 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:00:13.729845 disk-uuid[553]: Primary Header is updated.
May 14 00:00:13.729845 disk-uuid[553]: Secondary Entries is updated.
May 14 00:00:13.729845 disk-uuid[553]: Secondary Header is updated.
May 14 00:00:13.736392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:00:13.742516 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:14.750471 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:00:14.751282 disk-uuid[555]: The operation has completed successfully.
May 14 00:00:14.770907 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 00:00:14.771008 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 00:00:14.802452 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 00:00:14.820254 sh[574]: Success
May 14 00:00:14.837237 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 14 00:00:14.865605 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 00:00:14.868090 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 00:00:14.881405 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 00:00:14.888453 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d
May 14 00:00:14.888490 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 00:00:14.888510 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 00:00:14.889529 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 00:00:14.890942 kernel: BTRFS info (device dm-0): using free space tree
May 14 00:00:14.894497 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 00:00:14.895657 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 00:00:14.896417 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 00:00:14.898912 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 00:00:14.920608 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 14 00:00:14.920649 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 00:00:14.920661 kernel: BTRFS info (device vda6): using free space tree
May 14 00:00:14.923576 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:00:14.927398 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 14 00:00:14.930249 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 00:00:14.932215 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 00:00:15.000873 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 00:00:15.004528 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 00:00:15.036962 ignition[665]: Ignition 2.20.0
May 14 00:00:15.036975 ignition[665]: Stage: fetch-offline
May 14 00:00:15.037005 ignition[665]: no configs at "/usr/lib/ignition/base.d"
May 14 00:00:15.037013 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:15.037165 ignition[665]: parsed url from cmdline: ""
May 14 00:00:15.037168 ignition[665]: no config URL provided
May 14 00:00:15.037172 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
May 14 00:00:15.037179 ignition[665]: no config at "/usr/lib/ignition/user.ign"
May 14 00:00:15.037243 ignition[665]: op(1): [started] loading QEMU firmware config module
May 14 00:00:15.037248 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 00:00:15.048920 ignition[665]: op(1): [finished] loading QEMU firmware config module
May 14 00:00:15.049493 systemd-networkd[763]: lo: Link UP
May 14 00:00:15.049497 systemd-networkd[763]: lo: Gained carrier
May 14 00:00:15.050285 systemd-networkd[763]: Enumeration completed
May 14 00:00:15.050702 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:00:15.050706 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 00:00:15.051294 systemd-networkd[763]: eth0: Link UP
May 14 00:00:15.051297 systemd-networkd[763]: eth0: Gained carrier
May 14 00:00:15.051303 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:00:15.051467 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 00:00:15.053382 systemd[1]: Reached target network.target - Network.
May 14 00:00:15.070406 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 00:00:15.098616 ignition[665]: parsing config with SHA512: 4770a7c57e9d0f413999c5d612265585fbe2dc0e5744a74eeb056e2e7b4d204fa60a46d5a3dbca9242e6300196c7d876819149724f90462b5de2ada3a518b4fe
May 14 00:00:15.104961 unknown[665]: fetched base config from "system"
May 14 00:00:15.104974 unknown[665]: fetched user config from "qemu"
May 14 00:00:15.105766 ignition[665]: fetch-offline: fetch-offline passed
May 14 00:00:15.105844 ignition[665]: Ignition finished successfully
May 14 00:00:15.107296 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 00:00:15.108962 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 00:00:15.109775 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 00:00:15.133050 ignition[771]: Ignition 2.20.0
May 14 00:00:15.133060 ignition[771]: Stage: kargs
May 14 00:00:15.133222 ignition[771]: no configs at "/usr/lib/ignition/base.d"
May 14 00:00:15.133232 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:15.134115 ignition[771]: kargs: kargs passed
May 14 00:00:15.137721 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 00:00:15.134166 ignition[771]: Ignition finished successfully
May 14 00:00:15.139524 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 00:00:15.159962 ignition[779]: Ignition 2.20.0
May 14 00:00:15.159979 ignition[779]: Stage: disks
May 14 00:00:15.160143 ignition[779]: no configs at "/usr/lib/ignition/base.d"
May 14 00:00:15.162634 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 00:00:15.160153 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:15.163951 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 00:00:15.161029 ignition[779]: disks: disks passed
May 14 00:00:15.165407 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 00:00:15.161072 ignition[779]: Ignition finished successfully
May 14 00:00:15.167041 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 00:00:15.168591 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 00:00:15.169768 systemd[1]: Reached target basic.target - Basic System.
May 14 00:00:15.172080 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 00:00:15.193309 systemd-resolved[277]: Detected conflict on linux IN A 10.0.0.146
May 14 00:00:15.193324 systemd-resolved[277]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
May 14 00:00:15.196165 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 14 00:00:15.200285 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 00:00:15.202749 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 00:00:15.259386 kernel: EXT4-fs (vda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none.
May 14 00:00:15.260040 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 00:00:15.261263 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 00:00:15.263431 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 00:00:15.264821 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 00:00:15.265675 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 00:00:15.265712 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 00:00:15.265734 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 00:00:15.275155 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 00:00:15.277909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 00:00:15.283055 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
May 14 00:00:15.283079 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 14 00:00:15.283090 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 00:00:15.283099 kernel: BTRFS info (device vda6): using free space tree
May 14 00:00:15.286445 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:00:15.286675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 00:00:15.320871 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
May 14 00:00:15.324924 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
May 14 00:00:15.328934 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
May 14 00:00:15.332005 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 00:00:15.405582 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 00:00:15.407293 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 00:00:15.408867 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 00:00:15.427406 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 14 00:00:15.442558 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 00:00:15.452216 ignition[911]: INFO : Ignition 2.20.0
May 14 00:00:15.452216 ignition[911]: INFO : Stage: mount
May 14 00:00:15.453580 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:00:15.453580 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:15.453580 ignition[911]: INFO : mount: mount passed
May 14 00:00:15.453580 ignition[911]: INFO : Ignition finished successfully
May 14 00:00:15.456754 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 00:00:15.458453 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 00:00:15.895093 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 00:00:15.896549 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 00:00:15.913377 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
May 14 00:00:15.915522 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 14 00:00:15.915538 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 00:00:15.916375 kernel: BTRFS info (device vda6): using free space tree
May 14 00:00:15.918385 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:00:15.919497 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 00:00:15.945316 ignition[943]: INFO : Ignition 2.20.0
May 14 00:00:15.945316 ignition[943]: INFO : Stage: files
May 14 00:00:15.946630 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:00:15.946630 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:15.946630 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
May 14 00:00:15.949236 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 00:00:15.949236 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 00:00:15.949236 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 00:00:15.949236 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 00:00:15.954194 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 00:00:15.954194 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 00:00:15.954194 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 00:00:15.949531 unknown[943]: wrote ssh authorized keys file for user: core
May 14 00:00:15.997254 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 00:00:16.130927 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 00:00:16.133575 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 14 00:00:16.323668 systemd-networkd[763]: eth0: Gained IPv6LL
May 14 00:00:16.382308 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 00:00:16.715484 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 00:00:16.715484 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 00:00:16.718551 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 00:00:16.718551 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 00:00:16.718551 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 00:00:16.718551 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 14 00:00:16.718551 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 00:00:16.718551 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 00:00:16.718551 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 14 00:00:16.718551 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 14 00:00:16.734447 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 00:00:16.737793 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 00:00:16.739279 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 00:00:16.739279 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 14 00:00:16.739279 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 14 00:00:16.739279 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 00:00:16.739279 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 00:00:16.739279 ignition[943]: INFO : files: files passed
May 14 00:00:16.739279 ignition[943]: INFO : Ignition finished successfully
May 14 00:00:16.741422 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 00:00:16.744504 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 00:00:16.746929 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 00:00:16.757059 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 00:00:16.757183 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 00:00:16.760152 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 00:00:16.762582 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:00:16.762582 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:00:16.765055 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:00:16.764405 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 00:00:16.767621 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 00:00:16.770499 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 00:00:16.801821 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 00:00:16.801931 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 00:00:16.803630 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 00:00:16.805244 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 00:00:16.806826 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 00:00:16.807591 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 00:00:16.834940 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 00:00:16.837221 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 00:00:16.857079 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 00:00:16.858297 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:00:16.860139 systemd[1]: Stopped target timers.target - Timer Units.
May 14 00:00:16.861625 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 00:00:16.861751 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 00:00:16.863944 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 00:00:16.865530 systemd[1]: Stopped target basic.target - Basic System.
May 14 00:00:16.866906 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 00:00:16.868476 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 00:00:16.870462 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 00:00:16.872479 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 00:00:16.874339 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 00:00:16.876240 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 00:00:16.878138 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 00:00:16.879939 systemd[1]: Stopped target swap.target - Swaps.
May 14 00:00:16.881150 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 00:00:16.881314 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 00:00:16.883398 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 00:00:16.885521 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:00:16.887432 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 00:00:16.888485 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:00:16.890465 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 00:00:16.890595 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 00:00:16.893226 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 00:00:16.893385 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 00:00:16.894984 systemd[1]: Stopped target paths.target - Path Units.
May 14 00:00:16.896285 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 00:00:16.897442 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:00:16.898974 systemd[1]: Stopped target slices.target - Slice Units.
May 14 00:00:16.900792 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 00:00:16.902626 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 00:00:16.902755 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 00:00:16.903983 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 00:00:16.904069 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 00:00:16.905266 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 00:00:16.905432 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 00:00:16.906831 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 00:00:16.906934 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 00:00:16.909287 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 00:00:16.910336 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 00:00:16.910487 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:00:16.913204 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 00:00:16.914056 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 00:00:16.914194 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:00:16.915625 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 00:00:16.915727 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 00:00:16.921620 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 00:00:16.921711 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 00:00:16.930087 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 00:00:16.934255 ignition[999]: INFO : Ignition 2.20.0
May 14 00:00:16.934255 ignition[999]: INFO : Stage: umount
May 14 00:00:16.934255 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:00:16.934255 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:16.938357 ignition[999]: INFO : umount: umount passed
May 14 00:00:16.938357 ignition[999]: INFO : Ignition finished successfully
May 14 00:00:16.935960 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 00:00:16.936059 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 00:00:16.937503 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 00:00:16.937587 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 00:00:16.939797 systemd[1]: Stopped target network.target - Network.
May 14 00:00:16.941146 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 00:00:16.941223 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 00:00:16.942557 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 00:00:16.942601 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 00:00:16.943957 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 00:00:16.943999 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 00:00:16.945415 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 00:00:16.945455 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 00:00:16.946983 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 00:00:16.947027 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 00:00:16.948727 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 00:00:16.950054 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 00:00:16.956174 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 00:00:16.956312 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 00:00:16.959678 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 00:00:16.959927 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 00:00:16.959964 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:00:16.962631 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 00:00:16.963524 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 00:00:16.963646 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 00:00:16.966129 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 00:00:16.966270 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 00:00:16.966297 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:00:16.968093 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 00:00:16.969004 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 00:00:16.969053 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 00:00:16.970602 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 00:00:16.970645 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 00:00:16.973096 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 00:00:16.973143 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 00:00:16.974731 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:00:16.977227 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 00:00:16.999769 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 00:00:16.999915 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:00:17.002154 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 00:00:17.002214 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 00:00:17.003514 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 00:00:17.003548 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:00:17.004878 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 00:00:17.004922 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 00:00:17.007014 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 00:00:17.007059 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 00:00:17.009305 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:00:17.009376 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:17.012690 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 00:00:17.013546 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 00:00:17.013605 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:00:17.016211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:00:17.016255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:17.030615 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 00:00:17.030720 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 00:00:17.036138 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 00:00:17.036246 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 00:00:17.038419 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 00:00:17.040028 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 00:00:17.061629 systemd[1]: Switching root.
May 14 00:00:17.085044 systemd-journald[237]: Journal stopped
May 14 00:00:18.057044 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
May 14 00:00:18.057095 kernel: SELinux: policy capability network_peer_controls=1
May 14 00:00:18.057107 kernel: SELinux: policy capability open_perms=1
May 14 00:00:18.057117 kernel: SELinux: policy capability extended_socket_class=1
May 14 00:00:18.057127 kernel: SELinux: policy capability always_check_network=0
May 14 00:00:18.057136 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 00:00:18.057146 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 00:00:18.057156 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 00:00:18.057169 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 00:00:18.057179 kernel: audit: type=1403 audit(1747180817.337:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 00:00:18.057190 systemd[1]: Successfully loaded SELinux policy in 32.855ms.
May 14 00:00:18.057209 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.979ms.
May 14 00:00:18.057222 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 00:00:18.057234 systemd[1]: Detected virtualization kvm.
May 14 00:00:18.057244 systemd[1]: Detected architecture arm64.
May 14 00:00:18.057255 systemd[1]: Detected first boot.
May 14 00:00:18.057267 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:00:18.057278 zram_generator::config[1046]: No configuration found.
May 14 00:00:18.057290 kernel: NET: Registered PF_VSOCK protocol family
May 14 00:00:18.057301 systemd[1]: Populated /etc with preset unit settings.
May 14 00:00:18.057316 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 00:00:18.057327 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 00:00:18.057347 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 00:00:18.057359 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 00:00:18.057383 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 00:00:18.057396 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 00:00:18.057407 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 00:00:18.057417 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 00:00:18.057428 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 00:00:18.057438 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 00:00:18.057449 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 00:00:18.057460 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 00:00:18.057470 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:00:18.057482 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:00:18.057493 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 00:00:18.057503 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 00:00:18.057514 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 00:00:18.057525 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 00:00:18.057536 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 00:00:18.057546 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:00:18.057557 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 00:00:18.057569 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 00:00:18.057580 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 00:00:18.057595 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 00:00:18.057605 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:00:18.057616 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 00:00:18.057626 systemd[1]: Reached target slices.target - Slice Units.
May 14 00:00:18.057637 systemd[1]: Reached target swap.target - Swaps.
May 14 00:00:18.057647 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 00:00:18.057658 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 00:00:18.057670 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 00:00:18.057681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:00:18.057692 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 00:00:18.057703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:00:18.057714 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 00:00:18.057725 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 00:00:18.057737 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 00:00:18.057747 systemd[1]: Mounting media.mount - External Media Directory...
May 14 00:00:18.057757 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 00:00:18.057769 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 00:00:18.057780 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 00:00:18.057791 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 00:00:18.057801 systemd[1]: Reached target machines.target - Containers.
May 14 00:00:18.057811 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 00:00:18.057823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:00:18.057834 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 00:00:18.057844 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 00:00:18.057855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:00:18.057867 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 00:00:18.057877 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:00:18.057888 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 00:00:18.057898 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:00:18.057909 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 00:00:18.057920 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 00:00:18.057930 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 00:00:18.057945 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 00:00:18.057956 kernel: fuse: init (API version 7.39)
May 14 00:00:18.057966 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 00:00:18.057977 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:00:18.057988 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 00:00:18.057998 kernel: ACPI: bus type drm_connector registered
May 14 00:00:18.058009 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 00:00:18.058019 kernel: loop: module loaded
May 14 00:00:18.058029 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 00:00:18.058040 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 00:00:18.058054 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 00:00:18.058065 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 00:00:18.058076 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 00:00:18.058087 systemd[1]: Stopped verity-setup.service.
May 14 00:00:18.058100 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 00:00:18.058111 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 00:00:18.058121 systemd[1]: Mounted media.mount - External Media Directory.
May 14 00:00:18.058131 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 00:00:18.058142 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 00:00:18.058152 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 00:00:18.058180 systemd-journald[1121]: Collecting audit messages is disabled.
May 14 00:00:18.058205 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 00:00:18.058218 systemd-journald[1121]: Journal started
May 14 00:00:18.058239 systemd-journald[1121]: Runtime Journal (/run/log/journal/f801c37aa7dc4eda896afb9d2a3bd5f0) is 5.9M, max 47.3M, 41.4M free.
May 14 00:00:17.823589 systemd[1]: Queued start job for default target multi-user.target.
May 14 00:00:17.833665 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 00:00:17.834074 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 00:00:18.060481 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:00:18.062425 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 00:00:18.062972 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 00:00:18.063147 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 00:00:18.064549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:00:18.064723 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:00:18.065953 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 00:00:18.066127 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 00:00:18.067279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:00:18.067488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:00:18.068606 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 00:00:18.068757 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 00:00:18.071651 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:00:18.071815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:00:18.073002 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 00:00:18.074188 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 00:00:18.075884 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 00:00:18.077208 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 00:00:18.090116 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 00:00:18.092741 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 00:00:18.094457 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 00:00:18.095264 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 00:00:18.095292 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 00:00:18.097142 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 00:00:18.105354 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 00:00:18.107122 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 00:00:18.108184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:00:18.110497 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 00:00:18.112597 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 00:00:18.113797 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 00:00:18.114666 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 00:00:18.115811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 00:00:18.116761 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 00:00:18.118657 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 00:00:18.124242 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 00:00:18.127128 systemd-journald[1121]: Time spent on flushing to /var/log/journal/f801c37aa7dc4eda896afb9d2a3bd5f0 is 14.155ms for 869 entries.
May 14 00:00:18.127128 systemd-journald[1121]: System Journal (/var/log/journal/f801c37aa7dc4eda896afb9d2a3bd5f0) is 8M, max 195.6M, 187.6M free.
May 14 00:00:18.157239 systemd-journald[1121]: Received client request to flush runtime journal.
May 14 00:00:18.157289 kernel: loop0: detected capacity change from 0 to 194096
May 14 00:00:18.128468 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:00:18.132410 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 00:00:18.133568 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 00:00:18.138098 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 00:00:18.141694 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 00:00:18.145287 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 00:00:18.152068 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 00:00:18.157500 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 00:00:18.159592 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 00:00:18.161113 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 00:00:18.169141 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 00:00:18.172385 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 00:00:18.175651 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 00:00:18.183607 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 14 00:00:18.199491 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 00:00:18.209472 kernel: loop1: detected capacity change from 0 to 126448
May 14 00:00:18.213201 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
May 14 00:00:18.213215 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
May 14 00:00:18.218488 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:00:18.252524 kernel: loop2: detected capacity change from 0 to 103832
May 14 00:00:18.305397 kernel: loop3: detected capacity change from 0 to 194096
May 14 00:00:18.313387 kernel: loop4: detected capacity change from 0 to 126448
May 14 00:00:18.318382 kernel: loop5: detected capacity change from 0 to 103832
May 14 00:00:18.321396 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 00:00:18.321798 (sd-merge)[1187]: Merged extensions into '/usr'.
May 14 00:00:18.326033 systemd[1]: Reload requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 00:00:18.326260 systemd[1]: Reloading...
May 14 00:00:18.385456 zram_generator::config[1215]: No configuration found.
May 14 00:00:18.422383 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 00:00:18.476790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:00:18.525905 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 00:00:18.526246 systemd[1]: Reloading finished in 199 ms.
May 14 00:00:18.551076 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 00:00:18.552383 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 00:00:18.566635 systemd[1]: Starting ensure-sysext.service...
May 14 00:00:18.568309 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 00:00:18.581949 systemd[1]: Reload requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
May 14 00:00:18.581966 systemd[1]: Reloading...
May 14 00:00:18.587873 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 00:00:18.588421 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 00:00:18.589138 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 00:00:18.589466 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 14 00:00:18.589578 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 14 00:00:18.593002 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
May 14 00:00:18.593109 systemd-tmpfiles[1250]: Skipping /boot
May 14 00:00:18.601886 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
May 14 00:00:18.601979 systemd-tmpfiles[1250]: Skipping /boot
May 14 00:00:18.637418 zram_generator::config[1279]: No configuration found.
May 14 00:00:18.717126 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:00:18.766721 systemd[1]: Reloading finished in 184 ms.
May 14 00:00:18.778218 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 00:00:18.785420 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:00:18.796349 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 00:00:18.798676 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 00:00:18.800988 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 00:00:18.803963 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 00:00:18.808048 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:00:18.812786 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 00:00:18.830126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:00:18.834015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:00:18.837003 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:00:18.842252 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:00:18.844008 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:00:18.844201 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:00:18.848636 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 00:00:18.851280 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
May 14 00:00:18.855401 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 00:00:18.857312 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:00:18.858095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:00:18.860195 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:00:18.860632 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:00:18.863586 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:00:18.864291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:00:18.870623 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 00:00:18.880056 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:00:18.884628 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:00:18.885649 augenrules[1349]: No rules
May 14 00:00:18.889609 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:00:18.892083 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:00:18.895562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:00:18.895687 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:00:18.900117 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 00:00:18.902321 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:00:18.904300 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 00:00:18.905653 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 00:00:18.906936 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 00:00:18.908627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:00:18.908808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:00:18.910612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:00:18.910761 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:00:18.913737 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 00:00:18.915578 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:00:18.915847 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:00:18.922467 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 00:00:18.935189 systemd[1]: Finished ensure-sysext.service.
May 14 00:00:18.945355 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 00:00:18.947556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:00:18.950566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:00:18.954813 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 00:00:18.959219 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:00:18.966503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:00:18.968962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:00:18.969011 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:00:18.970672 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 00:00:18.973100 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 00:00:18.974474 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 00:00:18.977547 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1364)
May 14 00:00:18.993343 augenrules[1389]: /sbin/augenrules: No change
May 14 00:00:18.994250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:00:18.996571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:00:18.999715 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 00:00:18.999904 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 00:00:19.001004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:00:19.001157 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:00:19.002480 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:00:19.002640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:00:19.009003 augenrules[1422]: No rules
May 14 00:00:19.011948 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 00:00:19.012144 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 00:00:19.013341 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 00:00:19.025989 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 00:00:19.026061 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 00:00:19.030998 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 00:00:19.038542 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 00:00:19.057709 systemd-resolved[1319]: Positive Trust Anchors:
May 14 00:00:19.064201 systemd-resolved[1319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:00:19.064290 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:00:19.072173 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 00:00:19.076530 systemd-resolved[1319]: Defaulting to hostname 'linux'.
May 14 00:00:19.082975 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 00:00:19.083985 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 00:00:19.102396 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 00:00:19.104505 systemd[1]: Reached target time-set.target - System Time Set.
May 14 00:00:19.107824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:19.125292 systemd-networkd[1398]: lo: Link UP
May 14 00:00:19.125706 systemd-networkd[1398]: lo: Gained carrier
May 14 00:00:19.126643 systemd-networkd[1398]: Enumeration completed
May 14 00:00:19.127420 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 00:00:19.128622 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 00:00:19.129611 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:00:19.129702 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:00:19.129808 systemd[1]: Reached target network.target - Network. May 14 00:00:19.130350 systemd-networkd[1398]: eth0: Link UP May 14 00:00:19.130443 systemd-networkd[1398]: eth0: Gained carrier May 14 00:00:19.130499 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:00:19.132254 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 00:00:19.134375 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 00:00:19.136241 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 00:00:19.162933 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:00:19.169434 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:00:19.171076 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. May 14 00:00:19.171599 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 00:00:19.171861 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 00:00:19.171909 systemd-timesyncd[1404]: Initial clock synchronization to Wed 2025-05-14 00:00:19.528970 UTC. May 14 00:00:19.183403 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:00:19.196890 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 00:00:19.198183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 00:00:19.200536 systemd[1]: Reached target sysinit.target - System Initialization. 
May 14 00:00:19.201411 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 00:00:19.202311 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 00:00:19.203670 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 00:00:19.204584 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 00:00:19.205516 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 00:00:19.206393 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:00:19.206425 systemd[1]: Reached target paths.target - Path Units. May 14 00:00:19.207058 systemd[1]: Reached target timers.target - Timer Units. May 14 00:00:19.208744 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 00:00:19.210884 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 00:00:19.213758 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 00:00:19.215130 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 00:00:19.216411 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 00:00:19.219444 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 00:00:19.220830 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 00:00:19.223121 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 00:00:19.224730 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 00:00:19.225755 systemd[1]: Reached target sockets.target - Socket Units. May 14 00:00:19.226656 systemd[1]: Reached target basic.target - Basic System. 
May 14 00:00:19.227556 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 00:00:19.227588 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 00:00:19.228445 systemd[1]: Starting containerd.service - containerd container runtime... May 14 00:00:19.230124 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 00:00:19.230947 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:00:19.234486 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 00:00:19.236195 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 00:00:19.237593 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 00:00:19.238671 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 00:00:19.242603 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 00:00:19.246717 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 00:00:19.252425 jq[1453]: false May 14 00:00:19.251907 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 00:00:19.255489 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 00:00:19.257632 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 14 00:00:19.258305 extend-filesystems[1454]: Found loop3 May 14 00:00:19.261176 extend-filesystems[1454]: Found loop4 May 14 00:00:19.261176 extend-filesystems[1454]: Found loop5 May 14 00:00:19.261176 extend-filesystems[1454]: Found vda May 14 00:00:19.261176 extend-filesystems[1454]: Found vda1 May 14 00:00:19.261176 extend-filesystems[1454]: Found vda2 May 14 00:00:19.261176 extend-filesystems[1454]: Found vda3 May 14 00:00:19.261176 extend-filesystems[1454]: Found usr May 14 00:00:19.261176 extend-filesystems[1454]: Found vda4 May 14 00:00:19.261176 extend-filesystems[1454]: Found vda6 May 14 00:00:19.261176 extend-filesystems[1454]: Found vda7 May 14 00:00:19.261176 extend-filesystems[1454]: Found vda9 May 14 00:00:19.261176 extend-filesystems[1454]: Checking size of /dev/vda9 May 14 00:00:19.260144 dbus-daemon[1452]: [system] SELinux support is enabled May 14 00:00:19.259635 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:00:19.260572 systemd[1]: Starting update-engine.service - Update Engine... May 14 00:00:19.266460 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 00:00:19.268313 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 00:00:19.271352 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 00:00:19.273770 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:00:19.276725 jq[1469]: true May 14 00:00:19.273939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 00:00:19.274230 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:00:19.278342 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 00:00:19.280550 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 14 00:00:19.280735 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 00:00:19.291437 extend-filesystems[1454]: Resized partition /dev/vda9 May 14 00:00:19.297439 extend-filesystems[1484]: resize2fs 1.47.2 (1-Jan-2025) May 14 00:00:19.315107 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:00:19.303310 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 00:00:19.304786 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:00:19.315886 jq[1477]: true May 14 00:00:19.304813 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 00:00:19.316039 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:00:19.316059 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 00:00:19.323666 update_engine[1467]: I20250514 00:00:19.323457 1467 main.cc:92] Flatcar Update Engine starting May 14 00:00:19.326970 tar[1475]: linux-arm64/helm May 14 00:00:19.328407 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1371) May 14 00:00:19.330250 update_engine[1467]: I20250514 00:00:19.330091 1467 update_check_scheduler.cc:74] Next update check in 6m59s May 14 00:00:19.330373 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:00:19.332983 systemd[1]: Started update-engine.service - Update Engine. May 14 00:00:19.338180 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 14 00:00:19.345398 extend-filesystems[1484]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:00:19.345398 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:00:19.345398 extend-filesystems[1484]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 00:00:19.350667 extend-filesystems[1454]: Resized filesystem in /dev/vda9 May 14 00:00:19.355003 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:00:19.355212 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 00:00:19.366448 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (Power Button) May 14 00:00:19.366714 systemd-logind[1465]: New seat seat0. May 14 00:00:19.370492 systemd[1]: Started systemd-logind.service - User Login Management. May 14 00:00:19.415603 bash[1509]: Updated "/home/core/.ssh/authorized_keys" May 14 00:00:19.418784 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 00:00:19.420474 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 14 00:00:19.421017 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:00:19.546719 containerd[1478]: time="2025-05-14T00:00:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 00:00:19.547359 containerd[1478]: time="2025-05-14T00:00:19.547299320Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 14 00:00:19.559494 containerd[1478]: time="2025-05-14T00:00:19.559438680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.28µs" May 14 00:00:19.559494 containerd[1478]: time="2025-05-14T00:00:19.559484600Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 00:00:19.559575 containerd[1478]: time="2025-05-14T00:00:19.559505760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 00:00:19.559696 containerd[1478]: time="2025-05-14T00:00:19.559662840Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 00:00:19.559741 containerd[1478]: time="2025-05-14T00:00:19.559693160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 00:00:19.559741 containerd[1478]: time="2025-05-14T00:00:19.559722920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 00:00:19.559793 containerd[1478]: time="2025-05-14T00:00:19.559776120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 00:00:19.559814 containerd[1478]: time="2025-05-14T00:00:19.559796320Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:00:19.560135 containerd[1478]: time="2025-05-14T00:00:19.560098440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:00:19.560135 containerd[1478]: time="2025-05-14T00:00:19.560127640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:00:19.560179 containerd[1478]: time="2025-05-14T00:00:19.560143320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:00:19.560179 containerd[1478]: time="2025-05-14T00:00:19.560156680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 00:00:19.560260 containerd[1478]: time="2025-05-14T00:00:19.560234720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 00:00:19.560535 containerd[1478]: time="2025-05-14T00:00:19.560509640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 00:00:19.560623 containerd[1478]: time="2025-05-14T00:00:19.560605040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 00:00:19.560623 containerd[1478]: time="2025-05-14T00:00:19.560622280Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 00:00:19.560670 containerd[1478]: time="2025-05-14T00:00:19.560647040Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 00:00:19.561975 containerd[1478]: time="2025-05-14T00:00:19.561943960Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 00:00:19.562069 containerd[1478]: time="2025-05-14T00:00:19.562039200Z" level=info msg="metadata content store policy set" policy=shared May 14 00:00:19.565124 containerd[1478]: time="2025-05-14T00:00:19.565088480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 00:00:19.565178 containerd[1478]: time="2025-05-14T00:00:19.565142320Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 00:00:19.565178 containerd[1478]: time="2025-05-14T00:00:19.565156960Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 00:00:19.565178 containerd[1478]: time="2025-05-14T00:00:19.565168640Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 00:00:19.565253 containerd[1478]: time="2025-05-14T00:00:19.565184440Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 00:00:19.565253 containerd[1478]: time="2025-05-14T00:00:19.565197000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 00:00:19.565253 containerd[1478]: time="2025-05-14T00:00:19.565209040Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 00:00:19.565253 containerd[1478]: time="2025-05-14T00:00:19.565224640Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 00:00:19.565253 containerd[1478]: time="2025-05-14T00:00:19.565235600Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 00:00:19.565253 containerd[1478]: time="2025-05-14T00:00:19.565245520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 00:00:19.565253 containerd[1478]: time="2025-05-14T00:00:19.565255320Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 00:00:19.565388 containerd[1478]: time="2025-05-14T00:00:19.565267040Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 00:00:19.565481 containerd[1478]: time="2025-05-14T00:00:19.565410720Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 00:00:19.565481 containerd[1478]: time="2025-05-14T00:00:19.565438720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 00:00:19.565481 containerd[1478]: time="2025-05-14T00:00:19.565460360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 00:00:19.565481 containerd[1478]: time="2025-05-14T00:00:19.565472400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 00:00:19.565481 containerd[1478]: time="2025-05-14T00:00:19.565483760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 00:00:19.565628 containerd[1478]: time="2025-05-14T00:00:19.565493880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 00:00:19.565628 containerd[1478]: time="2025-05-14T00:00:19.565506360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 00:00:19.565628 containerd[1478]: time="2025-05-14T00:00:19.565520760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 
00:00:19.565628 containerd[1478]: time="2025-05-14T00:00:19.565532280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 00:00:19.565628 containerd[1478]: time="2025-05-14T00:00:19.565543640Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 00:00:19.565628 containerd[1478]: time="2025-05-14T00:00:19.565553480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 00:00:19.565842 containerd[1478]: time="2025-05-14T00:00:19.565804400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 00:00:19.565842 containerd[1478]: time="2025-05-14T00:00:19.565824680Z" level=info msg="Start snapshots syncer" May 14 00:00:19.565887 containerd[1478]: time="2025-05-14T00:00:19.565847160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 00:00:19.566108 containerd[1478]: time="2025-05-14T00:00:19.566058920Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 00:00:19.566212 containerd[1478]: time="2025-05-14T00:00:19.566109520Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 00:00:19.566212 containerd[1478]: time="2025-05-14T00:00:19.566174240Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 00:00:19.566294 containerd[1478]: time="2025-05-14T00:00:19.566272480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 00:00:19.566321 containerd[1478]: time="2025-05-14T00:00:19.566303320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 00:00:19.566321 containerd[1478]: time="2025-05-14T00:00:19.566316120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 00:00:19.566386 containerd[1478]: time="2025-05-14T00:00:19.566335600Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 00:00:19.566386 containerd[1478]: time="2025-05-14T00:00:19.566350640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 00:00:19.566386 containerd[1478]: time="2025-05-14T00:00:19.566382840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 00:00:19.566451 containerd[1478]: time="2025-05-14T00:00:19.566399480Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 00:00:19.566451 containerd[1478]: time="2025-05-14T00:00:19.566431200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 00:00:19.566451 containerd[1478]: time="2025-05-14T00:00:19.566443640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 00:00:19.566498 containerd[1478]: time="2025-05-14T00:00:19.566452960Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 00:00:19.566498 containerd[1478]: time="2025-05-14T00:00:19.566487000Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:00:19.566536 containerd[1478]: time="2025-05-14T00:00:19.566500160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:00:19.566536 containerd[1478]: time="2025-05-14T00:00:19.566509600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:00:19.566536 containerd[1478]: time="2025-05-14T00:00:19.566519560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:00:19.566536 containerd[1478]: time="2025-05-14T00:00:19.566527640Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 00:00:19.566601 containerd[1478]: time="2025-05-14T00:00:19.566536480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 00:00:19.566601 containerd[1478]: time="2025-05-14T00:00:19.566547640Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 00:00:19.566637 containerd[1478]: time="2025-05-14T00:00:19.566622920Z" level=info msg="runtime interface created" May 14 00:00:19.566637 containerd[1478]: time="2025-05-14T00:00:19.566628880Z" level=info msg="created NRI interface" May 14 00:00:19.566670 containerd[1478]: time="2025-05-14T00:00:19.566636920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 00:00:19.566670 containerd[1478]: time="2025-05-14T00:00:19.566648000Z" level=info msg="Connect containerd service" May 14 00:00:19.566703 containerd[1478]: time="2025-05-14T00:00:19.566682080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 00:00:19.567303 
containerd[1478]: time="2025-05-14T00:00:19.567277320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:00:19.668307 containerd[1478]: time="2025-05-14T00:00:19.668235840Z" level=info msg="Start subscribing containerd event" May 14 00:00:19.668307 containerd[1478]: time="2025-05-14T00:00:19.668302160Z" level=info msg="Start recovering state" May 14 00:00:19.668438 containerd[1478]: time="2025-05-14T00:00:19.668405760Z" level=info msg="Start event monitor" May 14 00:00:19.668438 containerd[1478]: time="2025-05-14T00:00:19.668421360Z" level=info msg="Start cni network conf syncer for default" May 14 00:00:19.668438 containerd[1478]: time="2025-05-14T00:00:19.668429960Z" level=info msg="Start streaming server" May 14 00:00:19.668438 containerd[1478]: time="2025-05-14T00:00:19.668437520Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 00:00:19.668568 containerd[1478]: time="2025-05-14T00:00:19.668444760Z" level=info msg="runtime interface starting up..." May 14 00:00:19.668568 containerd[1478]: time="2025-05-14T00:00:19.668450320Z" level=info msg="starting plugins..." May 14 00:00:19.668568 containerd[1478]: time="2025-05-14T00:00:19.668463280Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 00:00:19.668568 containerd[1478]: time="2025-05-14T00:00:19.668269400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:00:19.670436 containerd[1478]: time="2025-05-14T00:00:19.668586200Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:00:19.670436 containerd[1478]: time="2025-05-14T00:00:19.668640280Z" level=info msg="containerd successfully booted in 0.122539s" May 14 00:00:19.668753 systemd[1]: Started containerd.service - containerd container runtime. 
May 14 00:00:19.689093 tar[1475]: linux-arm64/LICENSE May 14 00:00:19.689275 tar[1475]: linux-arm64/README.md May 14 00:00:19.704503 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 00:00:20.489310 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:00:20.509323 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 00:00:20.512234 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 00:00:20.528101 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:00:20.528338 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 00:00:20.531035 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 00:00:20.551480 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 00:00:20.554898 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 00:00:20.557432 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 00:00:20.558559 systemd[1]: Reached target getty.target - Login Prompts. May 14 00:00:20.932580 systemd-networkd[1398]: eth0: Gained IPv6LL May 14 00:00:20.935159 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 00:00:20.936824 systemd[1]: Reached target network-online.target - Network is Online. May 14 00:00:20.939097 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 00:00:20.941292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:20.943267 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 00:00:20.965748 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 00:00:20.967375 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 00:00:20.968531 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 14 00:00:20.970918 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 00:00:21.496207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:21.497835 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 00:00:21.500423 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:00:21.504331 systemd[1]: Startup finished in 599ms (kernel) + 4.635s (initrd) + 4.201s (userspace) = 9.435s. May 14 00:00:22.012275 kubelet[1582]: E0514 00:00:22.012201 1582 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:00:22.014983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:00:22.015162 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:00:22.015506 systemd[1]: kubelet.service: Consumed 814ms CPU time, 240.9M memory peak. May 14 00:00:25.646052 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 00:00:25.647244 systemd[1]: Started sshd@0-10.0.0.146:22-10.0.0.1:46360.service - OpenSSH per-connection server daemon (10.0.0.1:46360). May 14 00:00:25.731472 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 46360 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:00:25.733892 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:25.742521 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 00:00:25.743588 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 14 00:00:25.749256 systemd-logind[1465]: New session 1 of user core. May 14 00:00:25.766583 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 00:00:25.769381 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 00:00:25.787587 (systemd)[1602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:00:25.789803 systemd-logind[1465]: New session c1 of user core. May 14 00:00:25.907043 systemd[1602]: Queued start job for default target default.target. May 14 00:00:25.918476 systemd[1602]: Created slice app.slice - User Application Slice. May 14 00:00:25.918510 systemd[1602]: Reached target paths.target - Paths. May 14 00:00:25.918550 systemd[1602]: Reached target timers.target - Timers. May 14 00:00:25.919881 systemd[1602]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 00:00:25.929430 systemd[1602]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 00:00:25.929499 systemd[1602]: Reached target sockets.target - Sockets. May 14 00:00:25.929540 systemd[1602]: Reached target basic.target - Basic System. May 14 00:00:25.929569 systemd[1602]: Reached target default.target - Main User Target. May 14 00:00:25.929597 systemd[1602]: Startup finished in 133ms. May 14 00:00:25.929776 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 00:00:25.931547 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 00:00:25.991851 systemd[1]: Started sshd@1-10.0.0.146:22-10.0.0.1:46376.service - OpenSSH per-connection server daemon (10.0.0.1:46376). May 14 00:00:26.049053 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 46376 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:00:26.050309 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:26.054667 systemd-logind[1465]: New session 2 of user core. 
May 14 00:00:26.065597 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 00:00:26.119501 sshd[1615]: Connection closed by 10.0.0.1 port 46376
May 14 00:00:26.119261 sshd-session[1613]: pam_unix(sshd:session): session closed for user core
May 14 00:00:26.133080 systemd[1]: sshd@1-10.0.0.146:22-10.0.0.1:46376.service: Deactivated successfully.
May 14 00:00:26.135300 systemd[1]: session-2.scope: Deactivated successfully.
May 14 00:00:26.138267 systemd-logind[1465]: Session 2 logged out. Waiting for processes to exit.
May 14 00:00:26.140267 systemd[1]: Started sshd@2-10.0.0.146:22-10.0.0.1:46378.service - OpenSSH per-connection server daemon (10.0.0.1:46378).
May 14 00:00:26.141992 systemd-logind[1465]: Removed session 2.
May 14 00:00:26.191583 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 46378 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:00:26.192873 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:26.198440 systemd-logind[1465]: New session 3 of user core.
May 14 00:00:26.209580 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 00:00:26.261640 sshd[1623]: Connection closed by 10.0.0.1 port 46378
May 14 00:00:26.261824 sshd-session[1620]: pam_unix(sshd:session): session closed for user core
May 14 00:00:26.273938 systemd[1]: sshd@2-10.0.0.146:22-10.0.0.1:46378.service: Deactivated successfully.
May 14 00:00:26.275742 systemd[1]: session-3.scope: Deactivated successfully.
May 14 00:00:26.277081 systemd-logind[1465]: Session 3 logged out. Waiting for processes to exit.
May 14 00:00:26.278435 systemd[1]: Started sshd@3-10.0.0.146:22-10.0.0.1:46382.service - OpenSSH per-connection server daemon (10.0.0.1:46382).
May 14 00:00:26.279238 systemd-logind[1465]: Removed session 3.
May 14 00:00:26.332849 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 46382 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:00:26.334192 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:26.338460 systemd-logind[1465]: New session 4 of user core.
May 14 00:00:26.348544 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 00:00:26.404418 sshd[1631]: Connection closed by 10.0.0.1 port 46382
May 14 00:00:26.405020 sshd-session[1628]: pam_unix(sshd:session): session closed for user core
May 14 00:00:26.416568 systemd[1]: sshd@3-10.0.0.146:22-10.0.0.1:46382.service: Deactivated successfully.
May 14 00:00:26.418590 systemd[1]: session-4.scope: Deactivated successfully.
May 14 00:00:26.420421 systemd-logind[1465]: Session 4 logged out. Waiting for processes to exit.
May 14 00:00:26.422469 systemd[1]: Started sshd@4-10.0.0.146:22-10.0.0.1:46384.service - OpenSSH per-connection server daemon (10.0.0.1:46384).
May 14 00:00:26.423959 systemd-logind[1465]: Removed session 4.
May 14 00:00:26.478682 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 46384 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:00:26.479973 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:26.484982 systemd-logind[1465]: New session 5 of user core.
May 14 00:00:26.498582 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 00:00:26.562082 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 00:00:26.562437 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 00:00:26.577415 sudo[1640]: pam_unix(sudo:session): session closed for user root
May 14 00:00:26.579641 sshd[1639]: Connection closed by 10.0.0.1 port 46384
May 14 00:00:26.579422 sshd-session[1636]: pam_unix(sshd:session): session closed for user core
May 14 00:00:26.597880 systemd[1]: sshd@4-10.0.0.146:22-10.0.0.1:46384.service: Deactivated successfully.
May 14 00:00:26.599340 systemd[1]: session-5.scope: Deactivated successfully.
May 14 00:00:26.600454 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit.
May 14 00:00:26.601950 systemd[1]: Started sshd@5-10.0.0.146:22-10.0.0.1:46392.service - OpenSSH per-connection server daemon (10.0.0.1:46392).
May 14 00:00:26.602740 systemd-logind[1465]: Removed session 5.
May 14 00:00:26.656218 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 46392 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:00:26.657672 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:26.661847 systemd-logind[1465]: New session 6 of user core.
May 14 00:00:26.672580 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 00:00:26.727272 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 00:00:26.727565 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 00:00:26.731096 sudo[1650]: pam_unix(sudo:session): session closed for user root
May 14 00:00:26.736275 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 00:00:26.736624 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 00:00:26.746634 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 00:00:26.785435 augenrules[1672]: No rules
May 14 00:00:26.786798 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 00:00:26.788440 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 00:00:26.789508 sudo[1649]: pam_unix(sudo:session): session closed for user root
May 14 00:00:26.791034 sshd[1648]: Connection closed by 10.0.0.1 port 46392
May 14 00:00:26.791547 sshd-session[1645]: pam_unix(sshd:session): session closed for user core
May 14 00:00:26.802640 systemd[1]: sshd@5-10.0.0.146:22-10.0.0.1:46392.service: Deactivated successfully.
May 14 00:00:26.806074 systemd[1]: session-6.scope: Deactivated successfully.
May 14 00:00:26.807394 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit.
May 14 00:00:26.808572 systemd[1]: Started sshd@6-10.0.0.146:22-10.0.0.1:46406.service - OpenSSH per-connection server daemon (10.0.0.1:46406).
May 14 00:00:26.809395 systemd-logind[1465]: Removed session 6.
May 14 00:00:26.870394 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 46406 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:00:26.871759 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:26.881242 systemd-logind[1465]: New session 7 of user core.
May 14 00:00:26.891590 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 00:00:26.944638 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 00:00:26.944911 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 00:00:27.299294 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 00:00:27.312654 (dockerd)[1706]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 00:00:27.581954 dockerd[1706]: time="2025-05-14T00:00:27.581814251Z" level=info msg="Starting up"
May 14 00:00:27.584195 dockerd[1706]: time="2025-05-14T00:00:27.584160439Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 14 00:00:27.739069 dockerd[1706]: time="2025-05-14T00:00:27.738897355Z" level=info msg="Loading containers: start."
May 14 00:00:27.896412 kernel: Initializing XFRM netlink socket
May 14 00:00:27.963712 systemd-networkd[1398]: docker0: Link UP
May 14 00:00:28.026650 dockerd[1706]: time="2025-05-14T00:00:28.026581239Z" level=info msg="Loading containers: done."
May 14 00:00:28.042022 dockerd[1706]: time="2025-05-14T00:00:28.041960498Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 00:00:28.042165 dockerd[1706]: time="2025-05-14T00:00:28.042058296Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 14 00:00:28.042268 dockerd[1706]: time="2025-05-14T00:00:28.042237524Z" level=info msg="Daemon has completed initialization"
May 14 00:00:28.074028 dockerd[1706]: time="2025-05-14T00:00:28.073964151Z" level=info msg="API listen on /run/docker.sock"
May 14 00:00:28.074134 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 00:00:28.722679 containerd[1478]: time="2025-05-14T00:00:28.722631910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 14 00:00:29.284555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290701835.mount: Deactivated successfully.
May 14 00:00:30.242183 containerd[1478]: time="2025-05-14T00:00:30.242121201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:30.242570 containerd[1478]: time="2025-05-14T00:00:30.242505422Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 14 00:00:30.243534 containerd[1478]: time="2025-05-14T00:00:30.243500697Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:30.246317 containerd[1478]: time="2025-05-14T00:00:30.246281663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:30.247212 containerd[1478]: time="2025-05-14T00:00:30.247171190Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.524498128s"
May 14 00:00:30.247258 containerd[1478]: time="2025-05-14T00:00:30.247213845Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 14 00:00:30.262950 containerd[1478]: time="2025-05-14T00:00:30.262873366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 14 00:00:31.543411 containerd[1478]: time="2025-05-14T00:00:31.542930136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:31.543765 containerd[1478]: time="2025-05-14T00:00:31.543477221Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 14 00:00:31.544337 containerd[1478]: time="2025-05-14T00:00:31.544306377Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:31.546880 containerd[1478]: time="2025-05-14T00:00:31.546826494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:31.547945 containerd[1478]: time="2025-05-14T00:00:31.547693434Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.284742094s"
May 14 00:00:31.547945 containerd[1478]: time="2025-05-14T00:00:31.547728389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 14 00:00:31.563118 containerd[1478]: time="2025-05-14T00:00:31.563091401Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 14 00:00:32.266750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 00:00:32.268543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:00:32.397833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:32.406664 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:00:32.501181 containerd[1478]: time="2025-05-14T00:00:32.500988412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:32.501478 containerd[1478]: time="2025-05-14T00:00:32.501419293Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 14 00:00:32.502919 containerd[1478]: time="2025-05-14T00:00:32.502170710Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:32.505944 containerd[1478]: time="2025-05-14T00:00:32.505736854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:32.506803 containerd[1478]: time="2025-05-14T00:00:32.506770992Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 943.648849ms"
May 14 00:00:32.506871 containerd[1478]: time="2025-05-14T00:00:32.506805217Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 14 00:00:32.511715 kubelet[2015]: E0514 00:00:32.511678 2015 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:00:32.515001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:00:32.515240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:00:32.516486 systemd[1]: kubelet.service: Consumed 140ms CPU time, 99.3M memory peak.
May 14 00:00:32.522481 containerd[1478]: time="2025-05-14T00:00:32.522356463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 14 00:00:33.418464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534282640.mount: Deactivated successfully.
May 14 00:00:33.617529 containerd[1478]: time="2025-05-14T00:00:33.617485133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:33.618070 containerd[1478]: time="2025-05-14T00:00:33.618013539Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 14 00:00:33.621277 containerd[1478]: time="2025-05-14T00:00:33.621240775Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:33.623260 containerd[1478]: time="2025-05-14T00:00:33.623213578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:33.623747 containerd[1478]: time="2025-05-14T00:00:33.623720697Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.101310324s"
May 14 00:00:33.623805 containerd[1478]: time="2025-05-14T00:00:33.623749119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 14 00:00:33.639141 containerd[1478]: time="2025-05-14T00:00:33.639093439Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 00:00:34.121499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1965093988.mount: Deactivated successfully.
May 14 00:00:34.813475 containerd[1478]: time="2025-05-14T00:00:34.813191265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:34.814319 containerd[1478]: time="2025-05-14T00:00:34.814129078Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 14 00:00:34.815087 containerd[1478]: time="2025-05-14T00:00:34.815058835Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:34.818422 containerd[1478]: time="2025-05-14T00:00:34.818342006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:34.819413 containerd[1478]: time="2025-05-14T00:00:34.819276235Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.18013237s"
May 14 00:00:34.819413 containerd[1478]: time="2025-05-14T00:00:34.819312322Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 14 00:00:34.834771 containerd[1478]: time="2025-05-14T00:00:34.834733777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 14 00:00:35.292202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731948290.mount: Deactivated successfully.
May 14 00:00:35.296412 containerd[1478]: time="2025-05-14T00:00:35.296038193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:35.297370 containerd[1478]: time="2025-05-14T00:00:35.297307998Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 14 00:00:35.298421 containerd[1478]: time="2025-05-14T00:00:35.298362352Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:35.300452 containerd[1478]: time="2025-05-14T00:00:35.300399190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:35.301646 containerd[1478]: time="2025-05-14T00:00:35.301611731Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 466.826056ms"
May 14 00:00:35.301702 containerd[1478]: time="2025-05-14T00:00:35.301646338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 14 00:00:35.317117 containerd[1478]: time="2025-05-14T00:00:35.317079963Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 14 00:00:35.824683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056944365.mount: Deactivated successfully.
May 14 00:00:37.026492 containerd[1478]: time="2025-05-14T00:00:37.026432948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:37.026869 containerd[1478]: time="2025-05-14T00:00:37.026810803Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 14 00:00:37.027850 containerd[1478]: time="2025-05-14T00:00:37.027814853Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:37.030455 containerd[1478]: time="2025-05-14T00:00:37.030422305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:37.031560 containerd[1478]: time="2025-05-14T00:00:37.031532884Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.714416031s"
May 14 00:00:37.031615 containerd[1478]: time="2025-05-14T00:00:37.031562018Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 14 00:00:41.963010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:41.963150 systemd[1]: kubelet.service: Consumed 140ms CPU time, 99.3M memory peak.
May 14 00:00:41.965757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:00:41.985544 systemd[1]: Reload requested from client PID 2259 ('systemctl') (unit session-7.scope)...
May 14 00:00:41.985562 systemd[1]: Reloading...
May 14 00:00:42.064546 zram_generator::config[2302]: No configuration found.
May 14 00:00:42.153484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:00:42.227830 systemd[1]: Reloading finished in 241 ms.
May 14 00:00:42.281246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:42.283676 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:00:42.286226 systemd[1]: kubelet.service: Deactivated successfully.
May 14 00:00:42.286507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:42.286556 systemd[1]: kubelet.service: Consumed 90ms CPU time, 82.5M memory peak.
May 14 00:00:42.288137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:00:42.416329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:00:42.420358 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 00:00:42.465449 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:00:42.465449 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 00:00:42.465449 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:00:42.465809 kubelet[2349]: I0514 00:00:42.465603 2349 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 00:00:43.767289 kubelet[2349]: I0514 00:00:43.767225 2349 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 14 00:00:43.767289 kubelet[2349]: I0514 00:00:43.767264 2349 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 00:00:43.767676 kubelet[2349]: I0514 00:00:43.767540 2349 server.go:927] "Client rotation is on, will bootstrap in background"
May 14 00:00:43.798189 kubelet[2349]: I0514 00:00:43.798153 2349 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 00:00:43.798486 kubelet[2349]: E0514 00:00:43.798456 2349 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.146:6443: connect: connection refused
May 14 00:00:43.806971 kubelet[2349]: I0514 00:00:43.806943 2349 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 00:00:43.808186 kubelet[2349]: I0514 00:00:43.808134 2349 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 00:00:43.808364 kubelet[2349]: I0514 00:00:43.808182 2349 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 14 00:00:43.808458 kubelet[2349]: I0514 00:00:43.808443 2349 topology_manager.go:138] "Creating topology manager with none policy"
May 14 00:00:43.808458 kubelet[2349]: I0514 00:00:43.808453 2349 container_manager_linux.go:301] "Creating device plugin manager"
May 14 00:00:43.808749 kubelet[2349]: I0514 00:00:43.808718 2349 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:00:43.811548 kubelet[2349]: I0514 00:00:43.811519 2349 kubelet.go:400] "Attempting to sync node with API server"
May 14 00:00:43.811548 kubelet[2349]: I0514 00:00:43.811546 2349 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 00:00:43.811859 kubelet[2349]: I0514 00:00:43.811837 2349 kubelet.go:312] "Adding apiserver pod source"
May 14 00:00:43.812001 kubelet[2349]: I0514 00:00:43.811983 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 00:00:43.813144 kubelet[2349]: I0514 00:00:43.813100 2349 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 14 00:00:43.813144 kubelet[2349]: W0514 00:00:43.813106 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused
May 14 00:00:43.813243 kubelet[2349]: E0514 00:00:43.813156 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused
May 14 00:00:43.813243 kubelet[2349]: W0514 00:00:43.813209 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused
May 14 00:00:43.813243 kubelet[2349]: E0514 00:00:43.813232 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused
May 14 00:00:43.813520 kubelet[2349]: I0514 00:00:43.813505 2349 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 00:00:43.813631 kubelet[2349]: W0514 00:00:43.813618 2349 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 14 00:00:43.814418 kubelet[2349]: I0514 00:00:43.814404 2349 server.go:1264] "Started kubelet"
May 14 00:00:43.819218 kubelet[2349]: I0514 00:00:43.819183 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 00:00:43.820999 kubelet[2349]: E0514 00:00:43.820796 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.146:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.146:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3bb871ec5390 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:00:43.814376336 +0000 UTC m=+1.391018164,LastTimestamp:2025-05-14 00:00:43.814376336 +0000 UTC m=+1.391018164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 14 00:00:43.822583 kubelet[2349]: I0514 00:00:43.821198 2349 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 00:00:43.822583 kubelet[2349]: I0514 00:00:43.822464 2349 server.go:455] "Adding debug handlers to kubelet server"
May 14 00:00:43.823085 kubelet[2349]: I0514 00:00:43.823059 2349 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 14 00:00:43.823194 kubelet[2349]: I0514 00:00:43.823178 2349 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 00:00:43.823710 kubelet[2349]: I0514 00:00:43.823648 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 00:00:43.823885 kubelet[2349]: I0514 00:00:43.823862 2349 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 00:00:43.824030 kubelet[2349]: I0514 00:00:43.824006 2349 reconciler.go:26] "Reconciler: start to sync state"
May 14 00:00:43.824683 kubelet[2349]: W0514 00:00:43.824436 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused
May 14 00:00:43.824748 kubelet[2349]: E0514 00:00:43.824703 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused
May 14 00:00:43.824869 kubelet[2349]: E0514 00:00:43.824845 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="200ms"
May 14 00:00:43.825607 kubelet[2349]: I0514 00:00:43.825350 2349 factory.go:221] Registration of the systemd container factory successfully
May 14 00:00:43.825607 kubelet[2349]: I0514 00:00:43.825453 2349 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no
such file or directory May 14 00:00:43.826841 kubelet[2349]: I0514 00:00:43.826817 2349 factory.go:221] Registration of the containerd container factory successfully May 14 00:00:43.827762 kubelet[2349]: E0514 00:00:43.827706 2349 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:00:43.836179 kubelet[2349]: I0514 00:00:43.836122 2349 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:00:43.836179 kubelet[2349]: I0514 00:00:43.836137 2349 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:00:43.836179 kubelet[2349]: I0514 00:00:43.836179 2349 state_mem.go:36] "Initialized new in-memory state store" May 14 00:00:43.839669 kubelet[2349]: I0514 00:00:43.839547 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:00:43.840835 kubelet[2349]: I0514 00:00:43.840804 2349 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:00:43.841071 kubelet[2349]: I0514 00:00:43.840961 2349 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:00:43.841071 kubelet[2349]: I0514 00:00:43.840981 2349 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:00:43.841071 kubelet[2349]: E0514 00:00:43.841022 2349 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:00:43.841595 kubelet[2349]: W0514 00:00:43.841561 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused May 14 00:00:43.841909 kubelet[2349]: E0514 00:00:43.841756 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused May 14 00:00:43.892303 kubelet[2349]: I0514 00:00:43.892263 2349 policy_none.go:49] "None policy: Start" May 14 00:00:43.893207 kubelet[2349]: I0514 00:00:43.893144 2349 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:00:43.893207 kubelet[2349]: I0514 00:00:43.893173 2349 state_mem.go:35] "Initializing new in-memory state store" May 14 00:00:43.899063 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 00:00:43.913284 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 00:00:43.916089 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 00:00:43.924887 kubelet[2349]: I0514 00:00:43.924856 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:43.925216 kubelet[2349]: E0514 00:00:43.925180 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" May 14 00:00:43.928111 kubelet[2349]: I0514 00:00:43.928089 2349 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:00:43.928477 kubelet[2349]: I0514 00:00:43.928288 2349 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:00:43.928477 kubelet[2349]: I0514 00:00:43.928423 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:00:43.930231 kubelet[2349]: E0514 00:00:43.930208 2349 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:00:43.941614 kubelet[2349]: I0514 00:00:43.941560 2349 topology_manager.go:215] "Topology Admit Handler" podUID="dcbb89a08ffb7ce4271ae857ee92db54" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 00:00:43.942554 kubelet[2349]: I0514 00:00:43.942518 2349 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 00:00:43.944716 kubelet[2349]: I0514 00:00:43.944615 2349 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 00:00:43.950755 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. 
May 14 00:00:43.961066 systemd[1]: Created slice kubepods-burstable-poddcbb89a08ffb7ce4271ae857ee92db54.slice - libcontainer container kubepods-burstable-poddcbb89a08ffb7ce4271ae857ee92db54.slice. May 14 00:00:43.973316 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 14 00:00:44.025215 kubelet[2349]: I0514 00:00:44.024324 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcbb89a08ffb7ce4271ae857ee92db54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcbb89a08ffb7ce4271ae857ee92db54\") " pod="kube-system/kube-apiserver-localhost" May 14 00:00:44.025215 kubelet[2349]: I0514 00:00:44.024394 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcbb89a08ffb7ce4271ae857ee92db54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcbb89a08ffb7ce4271ae857ee92db54\") " pod="kube-system/kube-apiserver-localhost" May 14 00:00:44.025215 kubelet[2349]: I0514 00:00:44.024460 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:44.025215 kubelet[2349]: I0514 00:00:44.024480 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcbb89a08ffb7ce4271ae857ee92db54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dcbb89a08ffb7ce4271ae857ee92db54\") " pod="kube-system/kube-apiserver-localhost" May 14 00:00:44.025215 
kubelet[2349]: I0514 00:00:44.024496 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:44.025427 kubelet[2349]: I0514 00:00:44.024510 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:44.025427 kubelet[2349]: I0514 00:00:44.024526 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:44.025427 kubelet[2349]: I0514 00:00:44.024553 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:44.025427 kubelet[2349]: I0514 00:00:44.024572 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 
14 00:00:44.025427 kubelet[2349]: E0514 00:00:44.025289 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="400ms" May 14 00:00:44.126888 kubelet[2349]: I0514 00:00:44.126848 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:44.127186 kubelet[2349]: E0514 00:00:44.127162 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" May 14 00:00:44.259684 containerd[1478]: time="2025-05-14T00:00:44.259616805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 14 00:00:44.272389 containerd[1478]: time="2025-05-14T00:00:44.272283748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dcbb89a08ffb7ce4271ae857ee92db54,Namespace:kube-system,Attempt:0,}" May 14 00:00:44.276057 containerd[1478]: time="2025-05-14T00:00:44.275876243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 14 00:00:44.426268 kubelet[2349]: E0514 00:00:44.426203 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="800ms" May 14 00:00:44.528801 kubelet[2349]: I0514 00:00:44.528680 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:44.529082 kubelet[2349]: E0514 00:00:44.529038 2349 kubelet_node_status.go:96] "Unable to 
register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" May 14 00:00:44.813857 kubelet[2349]: W0514 00:00:44.813708 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused May 14 00:00:44.813857 kubelet[2349]: E0514 00:00:44.813751 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused May 14 00:00:44.870991 kubelet[2349]: W0514 00:00:44.870890 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused May 14 00:00:44.870991 kubelet[2349]: E0514 00:00:44.870957 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused May 14 00:00:44.907441 kubelet[2349]: W0514 00:00:44.907299 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused May 14 00:00:44.907441 kubelet[2349]: E0514 00:00:44.907392 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.146:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 10.0.0.146:6443: connect: connection refused May 14 00:00:44.970032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1677657704.mount: Deactivated successfully. May 14 00:00:44.974996 containerd[1478]: time="2025-05-14T00:00:44.974626880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:44.977300 containerd[1478]: time="2025-05-14T00:00:44.977219968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 14 00:00:44.978164 containerd[1478]: time="2025-05-14T00:00:44.978135143Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:44.979540 containerd[1478]: time="2025-05-14T00:00:44.979481698Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:44.980144 containerd[1478]: time="2025-05-14T00:00:44.980032894Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 00:00:44.980558 containerd[1478]: time="2025-05-14T00:00:44.980523902Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:44.981277 containerd[1478]: time="2025-05-14T00:00:44.981108158Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 00:00:44.982982 containerd[1478]: time="2025-05-14T00:00:44.982945961Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:44.983703 containerd[1478]: time="2025-05-14T00:00:44.983679087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 721.544129ms" May 14 00:00:44.984747 containerd[1478]: time="2025-05-14T00:00:44.984719648Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 710.637529ms" May 14 00:00:44.988325 containerd[1478]: time="2025-05-14T00:00:44.988103647Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 710.225424ms" May 14 00:00:45.005546 containerd[1478]: time="2025-05-14T00:00:45.005436958Z" level=info msg="connecting to shim 168bf59b9cf2ae9258dcaccbc74274d83f029089d8ee1dd98b7bf384bf83dd04" address="unix:///run/containerd/s/b64e832e781d5b7f2eea7d4acc68182281cd2359f6bc9a0a2a4ed26c4558f3d5" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:45.008104 containerd[1478]: time="2025-05-14T00:00:45.007937395Z" level=info msg="connecting to shim 6a9b26e2ad87ee78f3c51ce8ad214ff5b52c0f0b5ff7026df644894dfbf067c9" 
address="unix:///run/containerd/s/e259dab0c1484260ce62a39abfa11afa20524a941e15d013e5b50a1bfeb6f6f6" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:45.014353 containerd[1478]: time="2025-05-14T00:00:45.014300944Z" level=info msg="connecting to shim f385d2a1476796573067ac2befae83bc44b935d7352655aa1a72936a097e9b3d" address="unix:///run/containerd/s/039f447b532d16572d26315b04ff3314efd3691784598f11ba6e74ff4153b354" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:45.044603 systemd[1]: Started cri-containerd-168bf59b9cf2ae9258dcaccbc74274d83f029089d8ee1dd98b7bf384bf83dd04.scope - libcontainer container 168bf59b9cf2ae9258dcaccbc74274d83f029089d8ee1dd98b7bf384bf83dd04. May 14 00:00:45.045956 systemd[1]: Started cri-containerd-6a9b26e2ad87ee78f3c51ce8ad214ff5b52c0f0b5ff7026df644894dfbf067c9.scope - libcontainer container 6a9b26e2ad87ee78f3c51ce8ad214ff5b52c0f0b5ff7026df644894dfbf067c9. May 14 00:00:45.050561 systemd[1]: Started cri-containerd-f385d2a1476796573067ac2befae83bc44b935d7352655aa1a72936a097e9b3d.scope - libcontainer container f385d2a1476796573067ac2befae83bc44b935d7352655aa1a72936a097e9b3d. 
May 14 00:00:45.084151 containerd[1478]: time="2025-05-14T00:00:45.083856050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"168bf59b9cf2ae9258dcaccbc74274d83f029089d8ee1dd98b7bf384bf83dd04\"" May 14 00:00:45.087844 containerd[1478]: time="2025-05-14T00:00:45.087803497Z" level=info msg="CreateContainer within sandbox \"168bf59b9cf2ae9258dcaccbc74274d83f029089d8ee1dd98b7bf384bf83dd04\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:00:45.089962 containerd[1478]: time="2025-05-14T00:00:45.089903901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dcbb89a08ffb7ce4271ae857ee92db54,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a9b26e2ad87ee78f3c51ce8ad214ff5b52c0f0b5ff7026df644894dfbf067c9\"" May 14 00:00:45.092793 containerd[1478]: time="2025-05-14T00:00:45.092758297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"f385d2a1476796573067ac2befae83bc44b935d7352655aa1a72936a097e9b3d\"" May 14 00:00:45.092855 containerd[1478]: time="2025-05-14T00:00:45.092768433Z" level=info msg="CreateContainer within sandbox \"6a9b26e2ad87ee78f3c51ce8ad214ff5b52c0f0b5ff7026df644894dfbf067c9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:00:45.095615 containerd[1478]: time="2025-05-14T00:00:45.095530604Z" level=info msg="CreateContainer within sandbox \"f385d2a1476796573067ac2befae83bc44b935d7352655aa1a72936a097e9b3d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:00:45.103171 containerd[1478]: time="2025-05-14T00:00:45.103121937Z" level=info msg="Container 713a54707fc307fac7a6e2fb2bc9aa2752ea6a097cb232542e8fc657eb21ff65: CDI devices from CRI Config.CDIDevices: []" May 14 
00:00:45.104129 containerd[1478]: time="2025-05-14T00:00:45.104094316Z" level=info msg="Container d2c31d990d351a46fa8215325552025fbcdedfabea19a572f9862bc405765950: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:45.106623 containerd[1478]: time="2025-05-14T00:00:45.106571556Z" level=info msg="Container 5426041fce2b17034d1b9eaaf5b6f1224dd899807e5bfb150d768cf73f575afd: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:45.111921 containerd[1478]: time="2025-05-14T00:00:45.111875870Z" level=info msg="CreateContainer within sandbox \"6a9b26e2ad87ee78f3c51ce8ad214ff5b52c0f0b5ff7026df644894dfbf067c9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d2c31d990d351a46fa8215325552025fbcdedfabea19a572f9862bc405765950\"" May 14 00:00:45.112640 containerd[1478]: time="2025-05-14T00:00:45.112612315Z" level=info msg="StartContainer for \"d2c31d990d351a46fa8215325552025fbcdedfabea19a572f9862bc405765950\"" May 14 00:00:45.113153 containerd[1478]: time="2025-05-14T00:00:45.112993078Z" level=info msg="CreateContainer within sandbox \"168bf59b9cf2ae9258dcaccbc74274d83f029089d8ee1dd98b7bf384bf83dd04\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"713a54707fc307fac7a6e2fb2bc9aa2752ea6a097cb232542e8fc657eb21ff65\"" May 14 00:00:45.113476 containerd[1478]: time="2025-05-14T00:00:45.113446195Z" level=info msg="StartContainer for \"713a54707fc307fac7a6e2fb2bc9aa2752ea6a097cb232542e8fc657eb21ff65\"" May 14 00:00:45.114219 containerd[1478]: time="2025-05-14T00:00:45.114187087Z" level=info msg="connecting to shim d2c31d990d351a46fa8215325552025fbcdedfabea19a572f9862bc405765950" address="unix:///run/containerd/s/e259dab0c1484260ce62a39abfa11afa20524a941e15d013e5b50a1bfeb6f6f6" protocol=ttrpc version=3 May 14 00:00:45.118076 containerd[1478]: time="2025-05-14T00:00:45.114697695Z" level=info msg="connecting to shim 713a54707fc307fac7a6e2fb2bc9aa2752ea6a097cb232542e8fc657eb21ff65" 
address="unix:///run/containerd/s/b64e832e781d5b7f2eea7d4acc68182281cd2359f6bc9a0a2a4ed26c4558f3d5" protocol=ttrpc version=3 May 14 00:00:45.119229 containerd[1478]: time="2025-05-14T00:00:45.119186037Z" level=info msg="CreateContainer within sandbox \"f385d2a1476796573067ac2befae83bc44b935d7352655aa1a72936a097e9b3d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5426041fce2b17034d1b9eaaf5b6f1224dd899807e5bfb150d768cf73f575afd\"" May 14 00:00:45.119714 containerd[1478]: time="2025-05-14T00:00:45.119686830Z" level=info msg="StartContainer for \"5426041fce2b17034d1b9eaaf5b6f1224dd899807e5bfb150d768cf73f575afd\"" May 14 00:00:45.121380 containerd[1478]: time="2025-05-14T00:00:45.120940894Z" level=info msg="connecting to shim 5426041fce2b17034d1b9eaaf5b6f1224dd899807e5bfb150d768cf73f575afd" address="unix:///run/containerd/s/039f447b532d16572d26315b04ff3314efd3691784598f11ba6e74ff4153b354" protocol=ttrpc version=3 May 14 00:00:45.131530 systemd[1]: Started cri-containerd-d2c31d990d351a46fa8215325552025fbcdedfabea19a572f9862bc405765950.scope - libcontainer container d2c31d990d351a46fa8215325552025fbcdedfabea19a572f9862bc405765950. May 14 00:00:45.134220 systemd[1]: Started cri-containerd-713a54707fc307fac7a6e2fb2bc9aa2752ea6a097cb232542e8fc657eb21ff65.scope - libcontainer container 713a54707fc307fac7a6e2fb2bc9aa2752ea6a097cb232542e8fc657eb21ff65. May 14 00:00:45.146521 systemd[1]: Started cri-containerd-5426041fce2b17034d1b9eaaf5b6f1224dd899807e5bfb150d768cf73f575afd.scope - libcontainer container 5426041fce2b17034d1b9eaaf5b6f1224dd899807e5bfb150d768cf73f575afd. 
May 14 00:00:45.181085 containerd[1478]: time="2025-05-14T00:00:45.179205854Z" level=info msg="StartContainer for \"d2c31d990d351a46fa8215325552025fbcdedfabea19a572f9862bc405765950\" returns successfully" May 14 00:00:45.194678 containerd[1478]: time="2025-05-14T00:00:45.194227505Z" level=info msg="StartContainer for \"713a54707fc307fac7a6e2fb2bc9aa2752ea6a097cb232542e8fc657eb21ff65\" returns successfully" May 14 00:00:45.216052 containerd[1478]: time="2025-05-14T00:00:45.216013059Z" level=info msg="StartContainer for \"5426041fce2b17034d1b9eaaf5b6f1224dd899807e5bfb150d768cf73f575afd\" returns successfully" May 14 00:00:45.227061 kubelet[2349]: E0514 00:00:45.227018 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="1.6s" May 14 00:00:45.297238 kubelet[2349]: W0514 00:00:45.297177 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused May 14 00:00:45.297238 kubelet[2349]: E0514 00:00:45.297242 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused May 14 00:00:45.332019 kubelet[2349]: I0514 00:00:45.331598 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:45.332019 kubelet[2349]: E0514 00:00:45.331939 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" May 14 
00:00:46.744078 kubelet[2349]: E0514 00:00:46.744039 2349 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 14 00:00:46.832049 kubelet[2349]: E0514 00:00:46.831781 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 00:00:46.935661 kubelet[2349]: I0514 00:00:46.935620 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:46.943917 kubelet[2349]: I0514 00:00:46.943710 2349 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 00:00:46.951494 kubelet[2349]: E0514 00:00:46.951460 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:47.051865 kubelet[2349]: E0514 00:00:47.051733 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:47.151995 kubelet[2349]: E0514 00:00:47.151947 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:47.252516 kubelet[2349]: E0514 00:00:47.252463 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:47.353395 kubelet[2349]: E0514 00:00:47.353273 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:47.454304 kubelet[2349]: E0514 00:00:47.453866 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:47.554600 kubelet[2349]: E0514 00:00:47.554552 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:47.655445 kubelet[2349]: E0514 00:00:47.655312 2349 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:47.755714 kubelet[2349]: E0514 00:00:47.755662 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:47.856104 kubelet[2349]: E0514 00:00:47.855998 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:48.374869 systemd[1]: Reload requested from client PID 2620 ('systemctl') (unit session-7.scope)... May 14 00:00:48.374888 systemd[1]: Reloading... May 14 00:00:48.446410 zram_generator::config[2664]: No configuration found. May 14 00:00:48.533163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:00:48.618812 systemd[1]: Reloading finished in 243 ms. May 14 00:00:48.639915 kubelet[2349]: I0514 00:00:48.639677 2349 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:00:48.639915 kubelet[2349]: E0514 00:00:48.639670 2349 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.183f3bb871ec5390 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:00:43.814376336 +0000 UTC m=+1.391018164,LastTimestamp:2025-05-14 00:00:43.814376336 +0000 UTC m=+1.391018164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:00:48.639900 systemd[1]: Stopping kubelet.service - kubelet: The 
Kubernetes Node Agent... May 14 00:00:48.650888 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:00:48.652442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:48.652494 systemd[1]: kubelet.service: Consumed 1.711s CPU time, 114.8M memory peak. May 14 00:00:48.655117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:48.790997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:48.802756 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:00:48.841420 kubelet[2706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:00:48.841420 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:00:48.841420 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:00:48.841778 kubelet[2706]: I0514 00:00:48.841457 2706 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:00:48.847446 kubelet[2706]: I0514 00:00:48.847411 2706 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:00:48.847446 kubelet[2706]: I0514 00:00:48.847440 2706 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:00:48.847632 kubelet[2706]: I0514 00:00:48.847610 2706 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:00:48.849048 kubelet[2706]: I0514 00:00:48.849022 2706 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:00:48.850517 kubelet[2706]: I0514 00:00:48.850492 2706 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:00:48.855790 kubelet[2706]: I0514 00:00:48.855760 2706 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:00:48.855982 kubelet[2706]: I0514 00:00:48.855954 2706 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:00:48.856149 kubelet[2706]: I0514 00:00:48.855983 2706 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:00:48.856149 kubelet[2706]: I0514 00:00:48.856147 2706 topology_manager.go:138] "Creating topology manager with none policy" May 14 
00:00:48.856253 kubelet[2706]: I0514 00:00:48.856155 2706 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:00:48.856253 kubelet[2706]: I0514 00:00:48.856188 2706 state_mem.go:36] "Initialized new in-memory state store" May 14 00:00:48.856304 kubelet[2706]: I0514 00:00:48.856291 2706 kubelet.go:400] "Attempting to sync node with API server" May 14 00:00:48.856333 kubelet[2706]: I0514 00:00:48.856307 2706 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:00:48.856360 kubelet[2706]: I0514 00:00:48.856334 2706 kubelet.go:312] "Adding apiserver pod source" May 14 00:00:48.856360 kubelet[2706]: I0514 00:00:48.856355 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:00:48.857415 kubelet[2706]: I0514 00:00:48.857056 2706 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:00:48.857415 kubelet[2706]: I0514 00:00:48.857232 2706 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:00:48.858683 kubelet[2706]: I0514 00:00:48.857643 2706 server.go:1264] "Started kubelet" May 14 00:00:48.858683 kubelet[2706]: I0514 00:00:48.858316 2706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:00:48.858683 kubelet[2706]: I0514 00:00:48.858570 2706 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:00:48.858683 kubelet[2706]: I0514 00:00:48.858613 2706 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:00:48.865017 kubelet[2706]: I0514 00:00:48.864973 2706 server.go:455] "Adding debug handlers to kubelet server" May 14 00:00:48.867448 kubelet[2706]: I0514 00:00:48.867421 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:00:48.870636 kubelet[2706]: I0514 00:00:48.870599 2706 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:00:48.870786 kubelet[2706]: I0514 00:00:48.870766 2706 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:00:48.871548 kubelet[2706]: I0514 00:00:48.871517 2706 reconciler.go:26] "Reconciler: start to sync state" May 14 00:00:48.872409 kubelet[2706]: I0514 00:00:48.872351 2706 factory.go:221] Registration of the systemd container factory successfully May 14 00:00:48.872507 kubelet[2706]: I0514 00:00:48.872480 2706 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:00:48.881201 kubelet[2706]: I0514 00:00:48.881172 2706 factory.go:221] Registration of the containerd container factory successfully May 14 00:00:48.886025 kubelet[2706]: I0514 00:00:48.885977 2706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:00:48.886863 kubelet[2706]: I0514 00:00:48.886843 2706 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:00:48.886908 kubelet[2706]: I0514 00:00:48.886887 2706 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:00:48.886908 kubelet[2706]: I0514 00:00:48.886907 2706 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:00:48.886975 kubelet[2706]: E0514 00:00:48.886947 2706 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:00:48.919104 kubelet[2706]: I0514 00:00:48.917846 2706 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:00:48.919104 kubelet[2706]: I0514 00:00:48.917867 2706 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:00:48.919104 kubelet[2706]: I0514 00:00:48.917890 2706 state_mem.go:36] "Initialized new in-memory state store" May 14 00:00:48.919104 kubelet[2706]: I0514 00:00:48.918051 2706 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:00:48.919104 kubelet[2706]: I0514 00:00:48.918061 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:00:48.919104 kubelet[2706]: I0514 00:00:48.918077 2706 policy_none.go:49] "None policy: Start" May 14 00:00:48.919499 kubelet[2706]: I0514 00:00:48.919239 2706 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:00:48.919499 kubelet[2706]: I0514 00:00:48.919260 2706 state_mem.go:35] "Initializing new in-memory state store" May 14 00:00:48.919499 kubelet[2706]: I0514 00:00:48.919409 2706 state_mem.go:75] "Updated machine memory state" May 14 00:00:48.923521 kubelet[2706]: I0514 00:00:48.923496 2706 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:00:48.923703 kubelet[2706]: I0514 00:00:48.923662 2706 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:00:48.923844 kubelet[2706]: I0514 00:00:48.923752 2706 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:00:48.973385 kubelet[2706]: I0514 00:00:48.972935 2706 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:48.979960 kubelet[2706]: I0514 00:00:48.979808 2706 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 14 00:00:48.979960 kubelet[2706]: I0514 00:00:48.979903 2706 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 00:00:48.987398 kubelet[2706]: I0514 00:00:48.987328 2706 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 00:00:48.987496 kubelet[2706]: I0514 00:00:48.987459 2706 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 00:00:48.987540 kubelet[2706]: I0514 00:00:48.987495 2706 topology_manager.go:215] "Topology Admit Handler" podUID="dcbb89a08ffb7ce4271ae857ee92db54" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 00:00:49.072021 kubelet[2706]: I0514 00:00:49.071967 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:49.072021 kubelet[2706]: I0514 00:00:49.072011 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:49.072021 kubelet[2706]: I0514 
00:00:49.072032 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:49.072217 kubelet[2706]: I0514 00:00:49.072049 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:49.072217 kubelet[2706]: I0514 00:00:49.072079 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 00:00:49.072217 kubelet[2706]: I0514 00:00:49.072100 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcbb89a08ffb7ce4271ae857ee92db54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcbb89a08ffb7ce4271ae857ee92db54\") " pod="kube-system/kube-apiserver-localhost" May 14 00:00:49.072217 kubelet[2706]: I0514 00:00:49.072116 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcbb89a08ffb7ce4271ae857ee92db54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dcbb89a08ffb7ce4271ae857ee92db54\") " pod="kube-system/kube-apiserver-localhost" May 14 00:00:49.072217 
kubelet[2706]: I0514 00:00:49.072132 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:49.072323 kubelet[2706]: I0514 00:00:49.072170 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcbb89a08ffb7ce4271ae857ee92db54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dcbb89a08ffb7ce4271ae857ee92db54\") " pod="kube-system/kube-apiserver-localhost" May 14 00:00:49.857594 kubelet[2706]: I0514 00:00:49.857545 2706 apiserver.go:52] "Watching apiserver" May 14 00:00:49.871483 kubelet[2706]: I0514 00:00:49.871436 2706 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:00:49.960967 kubelet[2706]: E0514 00:00:49.960917 2706 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 00:00:49.964414 kubelet[2706]: E0514 00:00:49.964375 2706 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 00:00:49.992191 kubelet[2706]: I0514 00:00:49.992130 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.992094382 podStartE2EDuration="992.094382ms" podCreationTimestamp="2025-05-14 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:49.964716731 +0000 UTC m=+1.158822488" watchObservedRunningTime="2025-05-14 00:00:49.992094382 +0000 UTC 
m=+1.186200100" May 14 00:00:50.011090 kubelet[2706]: I0514 00:00:50.010718 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.010685478 podStartE2EDuration="1.010685478s" podCreationTimestamp="2025-05-14 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:49.992391178 +0000 UTC m=+1.186496935" watchObservedRunningTime="2025-05-14 00:00:50.010685478 +0000 UTC m=+1.204791195" May 14 00:00:50.024712 kubelet[2706]: I0514 00:00:50.024633 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.024615393 podStartE2EDuration="1.024615393s" podCreationTimestamp="2025-05-14 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:50.010879115 +0000 UTC m=+1.204984873" watchObservedRunningTime="2025-05-14 00:00:50.024615393 +0000 UTC m=+1.218721110" May 14 00:00:53.635413 sudo[1685]: pam_unix(sudo:session): session closed for user root May 14 00:00:53.638395 sshd[1684]: Connection closed by 10.0.0.1 port 46406 May 14 00:00:53.638967 sshd-session[1680]: pam_unix(sshd:session): session closed for user core May 14 00:00:53.642719 systemd[1]: sshd@6-10.0.0.146:22-10.0.0.1:46406.service: Deactivated successfully. May 14 00:00:53.646226 systemd[1]: session-7.scope: Deactivated successfully. May 14 00:00:53.646478 systemd[1]: session-7.scope: Consumed 7.078s CPU time, 243.1M memory peak. May 14 00:00:53.647522 systemd-logind[1465]: Session 7 logged out. Waiting for processes to exit. May 14 00:00:53.648334 systemd-logind[1465]: Removed session 7. 
May 14 00:01:03.027484 kubelet[2706]: I0514 00:01:03.027443 2706 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:01:03.034815 containerd[1478]: time="2025-05-14T00:01:03.034770055Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:01:03.035113 kubelet[2706]: I0514 00:01:03.034986 2706 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:01:03.843703 kubelet[2706]: I0514 00:01:03.843654 2706 topology_manager.go:215] "Topology Admit Handler" podUID="f6c1fd92-f458-4c03-a05f-666440d7559d" podNamespace="kube-system" podName="kube-proxy-24rm4" May 14 00:01:03.860563 systemd[1]: Created slice kubepods-besteffort-podf6c1fd92_f458_4c03_a05f_666440d7559d.slice - libcontainer container kubepods-besteffort-podf6c1fd92_f458_4c03_a05f_666440d7559d.slice. May 14 00:01:03.868901 kubelet[2706]: I0514 00:01:03.868857 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6c1fd92-f458-4c03-a05f-666440d7559d-kube-proxy\") pod \"kube-proxy-24rm4\" (UID: \"f6c1fd92-f458-4c03-a05f-666440d7559d\") " pod="kube-system/kube-proxy-24rm4" May 14 00:01:03.868901 kubelet[2706]: I0514 00:01:03.868900 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6c1fd92-f458-4c03-a05f-666440d7559d-xtables-lock\") pod \"kube-proxy-24rm4\" (UID: \"f6c1fd92-f458-4c03-a05f-666440d7559d\") " pod="kube-system/kube-proxy-24rm4" May 14 00:01:03.869138 kubelet[2706]: I0514 00:01:03.868919 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6c1fd92-f458-4c03-a05f-666440d7559d-lib-modules\") pod \"kube-proxy-24rm4\" (UID: 
\"f6c1fd92-f458-4c03-a05f-666440d7559d\") " pod="kube-system/kube-proxy-24rm4" May 14 00:01:03.869138 kubelet[2706]: I0514 00:01:03.868937 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlkpw\" (UniqueName: \"kubernetes.io/projected/f6c1fd92-f458-4c03-a05f-666440d7559d-kube-api-access-tlkpw\") pod \"kube-proxy-24rm4\" (UID: \"f6c1fd92-f458-4c03-a05f-666440d7559d\") " pod="kube-system/kube-proxy-24rm4" May 14 00:01:03.952592 kubelet[2706]: I0514 00:01:03.952543 2706 topology_manager.go:215] "Topology Admit Handler" podUID="025a7bd8-467d-4e8a-9f19-dafed69d63e2" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-n7fzp" May 14 00:01:03.961425 systemd[1]: Created slice kubepods-besteffort-pod025a7bd8_467d_4e8a_9f19_dafed69d63e2.slice - libcontainer container kubepods-besteffort-pod025a7bd8_467d_4e8a_9f19_dafed69d63e2.slice. May 14 00:01:04.070192 kubelet[2706]: I0514 00:01:04.070151 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfc5l\" (UniqueName: \"kubernetes.io/projected/025a7bd8-467d-4e8a-9f19-dafed69d63e2-kube-api-access-mfc5l\") pod \"tigera-operator-797db67f8-n7fzp\" (UID: \"025a7bd8-467d-4e8a-9f19-dafed69d63e2\") " pod="tigera-operator/tigera-operator-797db67f8-n7fzp" May 14 00:01:04.070192 kubelet[2706]: I0514 00:01:04.070195 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/025a7bd8-467d-4e8a-9f19-dafed69d63e2-var-lib-calico\") pod \"tigera-operator-797db67f8-n7fzp\" (UID: \"025a7bd8-467d-4e8a-9f19-dafed69d63e2\") " pod="tigera-operator/tigera-operator-797db67f8-n7fzp" May 14 00:01:04.172288 containerd[1478]: time="2025-05-14T00:01:04.172208132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-24rm4,Uid:f6c1fd92-f458-4c03-a05f-666440d7559d,Namespace:kube-system,Attempt:0,}" May 
14 00:01:04.223541 containerd[1478]: time="2025-05-14T00:01:04.223492201Z" level=info msg="connecting to shim b243dede7d1abd928444ac19c5a6ec6a5474efb85f5c465091e5bed4bc0c6482" address="unix:///run/containerd/s/702344846c2d0a68f017ed84f01a7dd90f4c9ac824be18d77bca5ea77d331856" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:04.252560 systemd[1]: Started cri-containerd-b243dede7d1abd928444ac19c5a6ec6a5474efb85f5c465091e5bed4bc0c6482.scope - libcontainer container b243dede7d1abd928444ac19c5a6ec6a5474efb85f5c465091e5bed4bc0c6482. May 14 00:01:04.284893 containerd[1478]: time="2025-05-14T00:01:04.284836908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-n7fzp,Uid:025a7bd8-467d-4e8a-9f19-dafed69d63e2,Namespace:tigera-operator,Attempt:0,}" May 14 00:01:04.302939 containerd[1478]: time="2025-05-14T00:01:04.302703192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-24rm4,Uid:f6c1fd92-f458-4c03-a05f-666440d7559d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b243dede7d1abd928444ac19c5a6ec6a5474efb85f5c465091e5bed4bc0c6482\"" May 14 00:01:04.305241 containerd[1478]: time="2025-05-14T00:01:04.305209987Z" level=info msg="connecting to shim f4293200f11df7976c61ac42a3effe470987733136ca758c2e8c5c7122e9b7f1" address="unix:///run/containerd/s/cd4a68d943cba9215891d56fad2c9e5347fb5027ba5a3da742fbeef97d47a51b" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:04.308243 containerd[1478]: time="2025-05-14T00:01:04.308181131Z" level=info msg="CreateContainer within sandbox \"b243dede7d1abd928444ac19c5a6ec6a5474efb85f5c465091e5bed4bc0c6482\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:01:04.318862 containerd[1478]: time="2025-05-14T00:01:04.317885353Z" level=info msg="Container fe915532fc08e66ce8b7cd89022424d8013eb3904baa66c188817cd43ebc1ca3: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:04.326405 containerd[1478]: time="2025-05-14T00:01:04.325856320Z" level=info 
msg="CreateContainer within sandbox \"b243dede7d1abd928444ac19c5a6ec6a5474efb85f5c465091e5bed4bc0c6482\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe915532fc08e66ce8b7cd89022424d8013eb3904baa66c188817cd43ebc1ca3\"" May 14 00:01:04.334194 containerd[1478]: time="2025-05-14T00:01:04.331504543Z" level=info msg="StartContainer for \"fe915532fc08e66ce8b7cd89022424d8013eb3904baa66c188817cd43ebc1ca3\"" May 14 00:01:04.334194 containerd[1478]: time="2025-05-14T00:01:04.332889986Z" level=info msg="connecting to shim fe915532fc08e66ce8b7cd89022424d8013eb3904baa66c188817cd43ebc1ca3" address="unix:///run/containerd/s/702344846c2d0a68f017ed84f01a7dd90f4c9ac824be18d77bca5ea77d331856" protocol=ttrpc version=3 May 14 00:01:04.337582 systemd[1]: Started cri-containerd-f4293200f11df7976c61ac42a3effe470987733136ca758c2e8c5c7122e9b7f1.scope - libcontainer container f4293200f11df7976c61ac42a3effe470987733136ca758c2e8c5c7122e9b7f1. May 14 00:01:04.360620 systemd[1]: Started cri-containerd-fe915532fc08e66ce8b7cd89022424d8013eb3904baa66c188817cd43ebc1ca3.scope - libcontainer container fe915532fc08e66ce8b7cd89022424d8013eb3904baa66c188817cd43ebc1ca3. May 14 00:01:04.379961 containerd[1478]: time="2025-05-14T00:01:04.379920120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-n7fzp,Uid:025a7bd8-467d-4e8a-9f19-dafed69d63e2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f4293200f11df7976c61ac42a3effe470987733136ca758c2e8c5c7122e9b7f1\"" May 14 00:01:04.390665 containerd[1478]: time="2025-05-14T00:01:04.390585535Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 00:01:04.414795 containerd[1478]: time="2025-05-14T00:01:04.413388611Z" level=info msg="StartContainer for \"fe915532fc08e66ce8b7cd89022424d8013eb3904baa66c188817cd43ebc1ca3\" returns successfully" May 14 00:01:05.069015 update_engine[1467]: I20250514 00:01:05.068904 1467 update_attempter.cc:509] Updating boot flags... 
May 14 00:01:05.098403 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3039) May 14 00:01:05.137613 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2987) May 14 00:01:05.190535 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2987) May 14 00:01:07.526922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866928345.mount: Deactivated successfully. May 14 00:01:08.902177 kubelet[2706]: I0514 00:01:08.902041 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-24rm4" podStartSLOduration=5.902021009 podStartE2EDuration="5.902021009s" podCreationTimestamp="2025-05-14 00:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:04.951972672 +0000 UTC m=+16.146078389" watchObservedRunningTime="2025-05-14 00:01:08.902021009 +0000 UTC m=+20.096126726" May 14 00:01:09.162672 containerd[1478]: time="2025-05-14T00:01:09.162549779Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:09.163598 containerd[1478]: time="2025-05-14T00:01:09.163384902Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 14 00:01:09.164276 containerd[1478]: time="2025-05-14T00:01:09.164236630Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:09.166760 containerd[1478]: time="2025-05-14T00:01:09.166717668Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 
14 00:01:09.167459 containerd[1478]: time="2025-05-14T00:01:09.167432704Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 4.776804108s" May 14 00:01:09.167513 containerd[1478]: time="2025-05-14T00:01:09.167465436Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 14 00:01:09.170652 containerd[1478]: time="2025-05-14T00:01:09.170603767Z" level=info msg="CreateContainer within sandbox \"f4293200f11df7976c61ac42a3effe470987733136ca758c2e8c5c7122e9b7f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 00:01:09.176319 containerd[1478]: time="2025-05-14T00:01:09.175751994Z" level=info msg="Container dcc1eb9a5167bee859fe2ef36cf30e7365f918ef5aeeaa51b696bf3e66a5d838: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:09.189204 containerd[1478]: time="2025-05-14T00:01:09.189147003Z" level=info msg="CreateContainer within sandbox \"f4293200f11df7976c61ac42a3effe470987733136ca758c2e8c5c7122e9b7f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dcc1eb9a5167bee859fe2ef36cf30e7365f918ef5aeeaa51b696bf3e66a5d838\"" May 14 00:01:09.191389 containerd[1478]: time="2025-05-14T00:01:09.191345131Z" level=info msg="StartContainer for \"dcc1eb9a5167bee859fe2ef36cf30e7365f918ef5aeeaa51b696bf3e66a5d838\"" May 14 00:01:09.192142 containerd[1478]: time="2025-05-14T00:01:09.192103143Z" level=info msg="connecting to shim dcc1eb9a5167bee859fe2ef36cf30e7365f918ef5aeeaa51b696bf3e66a5d838" address="unix:///run/containerd/s/cd4a68d943cba9215891d56fad2c9e5347fb5027ba5a3da742fbeef97d47a51b" protocol=ttrpc version=3 May 14 00:01:09.229535 
systemd[1]: Started cri-containerd-dcc1eb9a5167bee859fe2ef36cf30e7365f918ef5aeeaa51b696bf3e66a5d838.scope - libcontainer container dcc1eb9a5167bee859fe2ef36cf30e7365f918ef5aeeaa51b696bf3e66a5d838. May 14 00:01:09.264529 containerd[1478]: time="2025-05-14T00:01:09.263932460Z" level=info msg="StartContainer for \"dcc1eb9a5167bee859fe2ef36cf30e7365f918ef5aeeaa51b696bf3e66a5d838\" returns successfully" May 14 00:01:09.964249 kubelet[2706]: I0514 00:01:09.964117 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-n7fzp" podStartSLOduration=2.175953427 podStartE2EDuration="6.964099955s" podCreationTimestamp="2025-05-14 00:01:03 +0000 UTC" firstStartedPulling="2025-05-14 00:01:04.381229365 +0000 UTC m=+15.575335082" lastFinishedPulling="2025-05-14 00:01:09.169375893 +0000 UTC m=+20.363481610" observedRunningTime="2025-05-14 00:01:09.963653903 +0000 UTC m=+21.157759620" watchObservedRunningTime="2025-05-14 00:01:09.964099955 +0000 UTC m=+21.158205672" May 14 00:01:13.969683 kubelet[2706]: I0514 00:01:13.969227 2706 topology_manager.go:215] "Topology Admit Handler" podUID="9683a186-dbe8-48ed-affc-26e4956fc780" podNamespace="calico-system" podName="calico-typha-54d8f6f784-mwdtk" May 14 00:01:13.986530 systemd[1]: Created slice kubepods-besteffort-pod9683a186_dbe8_48ed_affc_26e4956fc780.slice - libcontainer container kubepods-besteffort-pod9683a186_dbe8_48ed_affc_26e4956fc780.slice. 
May 14 00:01:14.035939 kubelet[2706]: I0514 00:01:14.035899 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlvjm\" (UniqueName: \"kubernetes.io/projected/9683a186-dbe8-48ed-affc-26e4956fc780-kube-api-access-qlvjm\") pod \"calico-typha-54d8f6f784-mwdtk\" (UID: \"9683a186-dbe8-48ed-affc-26e4956fc780\") " pod="calico-system/calico-typha-54d8f6f784-mwdtk" May 14 00:01:14.035939 kubelet[2706]: I0514 00:01:14.035942 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9683a186-dbe8-48ed-affc-26e4956fc780-tigera-ca-bundle\") pod \"calico-typha-54d8f6f784-mwdtk\" (UID: \"9683a186-dbe8-48ed-affc-26e4956fc780\") " pod="calico-system/calico-typha-54d8f6f784-mwdtk" May 14 00:01:14.036114 kubelet[2706]: I0514 00:01:14.035962 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9683a186-dbe8-48ed-affc-26e4956fc780-typha-certs\") pod \"calico-typha-54d8f6f784-mwdtk\" (UID: \"9683a186-dbe8-48ed-affc-26e4956fc780\") " pod="calico-system/calico-typha-54d8f6f784-mwdtk" May 14 00:01:14.163004 kubelet[2706]: I0514 00:01:14.162956 2706 topology_manager.go:215] "Topology Admit Handler" podUID="48d19a8b-2869-45ae-8f43-f1f0b148bd36" podNamespace="calico-system" podName="calico-node-fllxf" May 14 00:01:14.169900 systemd[1]: Created slice kubepods-besteffort-pod48d19a8b_2869_45ae_8f43_f1f0b148bd36.slice - libcontainer container kubepods-besteffort-pod48d19a8b_2869_45ae_8f43_f1f0b148bd36.slice. 
May 14 00:01:14.238835 kubelet[2706]: I0514 00:01:14.238716 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/48d19a8b-2869-45ae-8f43-f1f0b148bd36-flexvol-driver-host\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.238835 kubelet[2706]: I0514 00:01:14.238774 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/48d19a8b-2869-45ae-8f43-f1f0b148bd36-var-run-calico\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.238835 kubelet[2706]: I0514 00:01:14.238797 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48d19a8b-2869-45ae-8f43-f1f0b148bd36-var-lib-calico\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.238835 kubelet[2706]: I0514 00:01:14.238813 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/48d19a8b-2869-45ae-8f43-f1f0b148bd36-cni-bin-dir\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.238835 kubelet[2706]: I0514 00:01:14.238836 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48d19a8b-2869-45ae-8f43-f1f0b148bd36-lib-modules\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.239034 kubelet[2706]: I0514 00:01:14.238859 2706 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/48d19a8b-2869-45ae-8f43-f1f0b148bd36-policysync\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.239034 kubelet[2706]: I0514 00:01:14.238879 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48d19a8b-2869-45ae-8f43-f1f0b148bd36-tigera-ca-bundle\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.239034 kubelet[2706]: I0514 00:01:14.238911 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/48d19a8b-2869-45ae-8f43-f1f0b148bd36-node-certs\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.239034 kubelet[2706]: I0514 00:01:14.238930 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/48d19a8b-2869-45ae-8f43-f1f0b148bd36-cni-log-dir\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.239034 kubelet[2706]: I0514 00:01:14.238952 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnth4\" (UniqueName: \"kubernetes.io/projected/48d19a8b-2869-45ae-8f43-f1f0b148bd36-kube-api-access-lnth4\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.239139 kubelet[2706]: I0514 00:01:14.238984 2706 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/48d19a8b-2869-45ae-8f43-f1f0b148bd36-cni-net-dir\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.239139 kubelet[2706]: I0514 00:01:14.239005 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48d19a8b-2869-45ae-8f43-f1f0b148bd36-xtables-lock\") pod \"calico-node-fllxf\" (UID: \"48d19a8b-2869-45ae-8f43-f1f0b148bd36\") " pod="calico-system/calico-node-fllxf" May 14 00:01:14.292539 containerd[1478]: time="2025-05-14T00:01:14.292486223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54d8f6f784-mwdtk,Uid:9683a186-dbe8-48ed-affc-26e4956fc780,Namespace:calico-system,Attempt:0,}" May 14 00:01:14.309673 containerd[1478]: time="2025-05-14T00:01:14.309628511Z" level=info msg="connecting to shim 83ec1b159877a160b1a58a4b5bbc9751a354cfc28009b4a84422959ecda5ec8b" address="unix:///run/containerd/s/6bb32b7b813c1fd7b3ef6f1672c00503924d2449eff5ae4569a3915c1eda946a" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:14.329538 systemd[1]: Started cri-containerd-83ec1b159877a160b1a58a4b5bbc9751a354cfc28009b4a84422959ecda5ec8b.scope - libcontainer container 83ec1b159877a160b1a58a4b5bbc9751a354cfc28009b4a84422959ecda5ec8b. 
May 14 00:01:14.346396 kubelet[2706]: E0514 00:01:14.344968 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.346396 kubelet[2706]: W0514 00:01:14.344992 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.346396 kubelet[2706]: E0514 00:01:14.345023 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.346396 kubelet[2706]: E0514 00:01:14.345355 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.346396 kubelet[2706]: W0514 00:01:14.345392 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.346396 kubelet[2706]: E0514 00:01:14.345404 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.361358 kubelet[2706]: E0514 00:01:14.361123 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.362114 kubelet[2706]: W0514 00:01:14.362042 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.362114 kubelet[2706]: E0514 00:01:14.362076 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.383252 kubelet[2706]: I0514 00:01:14.383117 2706 topology_manager.go:215] "Topology Admit Handler" podUID="22d9bc0e-3492-4477-b79c-f76bddcb0f2b" podNamespace="calico-system" podName="csi-node-driver-72zk5" May 14 00:01:14.384231 kubelet[2706]: E0514 00:01:14.383562 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72zk5" podUID="22d9bc0e-3492-4477-b79c-f76bddcb0f2b" May 14 00:01:14.439858 kubelet[2706]: E0514 00:01:14.439830 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.439858 kubelet[2706]: W0514 00:01:14.439850 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.440037 kubelet[2706]: E0514 00:01:14.439870 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.440078 kubelet[2706]: E0514 00:01:14.440060 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.440078 kubelet[2706]: W0514 00:01:14.440072 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.440185 kubelet[2706]: E0514 00:01:14.440080 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.440233 kubelet[2706]: E0514 00:01:14.440221 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.440233 kubelet[2706]: W0514 00:01:14.440231 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.440290 kubelet[2706]: E0514 00:01:14.440240 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.440397 kubelet[2706]: E0514 00:01:14.440387 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.440397 kubelet[2706]: W0514 00:01:14.440397 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.440482 kubelet[2706]: E0514 00:01:14.440405 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.440566 kubelet[2706]: E0514 00:01:14.440555 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.440566 kubelet[2706]: W0514 00:01:14.440564 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.440619 kubelet[2706]: E0514 00:01:14.440572 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.440700 kubelet[2706]: E0514 00:01:14.440690 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.440700 kubelet[2706]: W0514 00:01:14.440699 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.440757 kubelet[2706]: E0514 00:01:14.440706 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.440851 kubelet[2706]: E0514 00:01:14.440842 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.440851 kubelet[2706]: W0514 00:01:14.440850 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.440907 kubelet[2706]: E0514 00:01:14.440858 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.440995 kubelet[2706]: E0514 00:01:14.440985 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.440995 kubelet[2706]: W0514 00:01:14.440994 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.441072 kubelet[2706]: E0514 00:01:14.441002 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.441147 kubelet[2706]: E0514 00:01:14.441137 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.441147 kubelet[2706]: W0514 00:01:14.441146 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.441204 kubelet[2706]: E0514 00:01:14.441154 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.441279 kubelet[2706]: E0514 00:01:14.441270 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.441279 kubelet[2706]: W0514 00:01:14.441279 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.441342 kubelet[2706]: E0514 00:01:14.441287 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.441428 kubelet[2706]: E0514 00:01:14.441419 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.441428 kubelet[2706]: W0514 00:01:14.441427 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.441486 kubelet[2706]: E0514 00:01:14.441434 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.441574 kubelet[2706]: E0514 00:01:14.441565 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.441574 kubelet[2706]: W0514 00:01:14.441574 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.441790 kubelet[2706]: E0514 00:01:14.441582 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.441790 kubelet[2706]: E0514 00:01:14.441719 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.441790 kubelet[2706]: W0514 00:01:14.441739 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.441790 kubelet[2706]: E0514 00:01:14.441746 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.441895 kubelet[2706]: E0514 00:01:14.441864 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.441895 kubelet[2706]: W0514 00:01:14.441870 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.441895 kubelet[2706]: E0514 00:01:14.441877 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.442016 kubelet[2706]: E0514 00:01:14.442006 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.442016 kubelet[2706]: W0514 00:01:14.442014 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.442016 kubelet[2706]: E0514 00:01:14.442022 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.442161 kubelet[2706]: E0514 00:01:14.442141 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.442161 kubelet[2706]: W0514 00:01:14.442152 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.442161 kubelet[2706]: E0514 00:01:14.442159 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.442315 kubelet[2706]: E0514 00:01:14.442304 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.442315 kubelet[2706]: W0514 00:01:14.442314 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.442379 kubelet[2706]: E0514 00:01:14.442321 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.442454 kubelet[2706]: E0514 00:01:14.442444 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.442454 kubelet[2706]: W0514 00:01:14.442453 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.442550 kubelet[2706]: E0514 00:01:14.442460 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.442578 kubelet[2706]: E0514 00:01:14.442573 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.442603 kubelet[2706]: W0514 00:01:14.442579 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.442603 kubelet[2706]: E0514 00:01:14.442585 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.442697 kubelet[2706]: E0514 00:01:14.442688 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.442697 kubelet[2706]: W0514 00:01:14.442696 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.442754 kubelet[2706]: E0514 00:01:14.442703 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.443768 kubelet[2706]: E0514 00:01:14.443422 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.443768 kubelet[2706]: W0514 00:01:14.443436 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.443768 kubelet[2706]: E0514 00:01:14.443449 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.443768 kubelet[2706]: I0514 00:01:14.443523 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22d9bc0e-3492-4477-b79c-f76bddcb0f2b-kubelet-dir\") pod \"csi-node-driver-72zk5\" (UID: \"22d9bc0e-3492-4477-b79c-f76bddcb0f2b\") " pod="calico-system/csi-node-driver-72zk5" May 14 00:01:14.444753 kubelet[2706]: E0514 00:01:14.443897 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.444753 kubelet[2706]: W0514 00:01:14.443907 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.444753 kubelet[2706]: E0514 00:01:14.443918 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.444753 kubelet[2706]: I0514 00:01:14.443933 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/22d9bc0e-3492-4477-b79c-f76bddcb0f2b-socket-dir\") pod \"csi-node-driver-72zk5\" (UID: \"22d9bc0e-3492-4477-b79c-f76bddcb0f2b\") " pod="calico-system/csi-node-driver-72zk5" May 14 00:01:14.444753 kubelet[2706]: E0514 00:01:14.444151 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.444753 kubelet[2706]: W0514 00:01:14.444161 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.444753 kubelet[2706]: E0514 00:01:14.444169 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.444753 kubelet[2706]: I0514 00:01:14.444182 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hbc5\" (UniqueName: \"kubernetes.io/projected/22d9bc0e-3492-4477-b79c-f76bddcb0f2b-kube-api-access-4hbc5\") pod \"csi-node-driver-72zk5\" (UID: \"22d9bc0e-3492-4477-b79c-f76bddcb0f2b\") " pod="calico-system/csi-node-driver-72zk5" May 14 00:01:14.444753 kubelet[2706]: E0514 00:01:14.444417 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.444934 kubelet[2706]: W0514 00:01:14.444428 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.444934 kubelet[2706]: E0514 00:01:14.444436 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.444934 kubelet[2706]: I0514 00:01:14.444450 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/22d9bc0e-3492-4477-b79c-f76bddcb0f2b-registration-dir\") pod \"csi-node-driver-72zk5\" (UID: \"22d9bc0e-3492-4477-b79c-f76bddcb0f2b\") " pod="calico-system/csi-node-driver-72zk5" May 14 00:01:14.446288 kubelet[2706]: E0514 00:01:14.445024 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.446288 kubelet[2706]: W0514 00:01:14.445042 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.446288 kubelet[2706]: E0514 00:01:14.445108 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.446288 kubelet[2706]: I0514 00:01:14.445130 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/22d9bc0e-3492-4477-b79c-f76bddcb0f2b-varrun\") pod \"csi-node-driver-72zk5\" (UID: \"22d9bc0e-3492-4477-b79c-f76bddcb0f2b\") " pod="calico-system/csi-node-driver-72zk5" May 14 00:01:14.446288 kubelet[2706]: E0514 00:01:14.445337 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.446288 kubelet[2706]: W0514 00:01:14.445346 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.446288 kubelet[2706]: E0514 00:01:14.445357 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.446288 kubelet[2706]: E0514 00:01:14.445600 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.446288 kubelet[2706]: W0514 00:01:14.445609 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.447409 kubelet[2706]: E0514 00:01:14.445624 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.447409 kubelet[2706]: E0514 00:01:14.445837 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.447409 kubelet[2706]: W0514 00:01:14.445853 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.447409 kubelet[2706]: E0514 00:01:14.445868 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.447409 kubelet[2706]: E0514 00:01:14.446055 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.447409 kubelet[2706]: W0514 00:01:14.446063 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.447409 kubelet[2706]: E0514 00:01:14.446142 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.447409 kubelet[2706]: E0514 00:01:14.446323 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.447409 kubelet[2706]: W0514 00:01:14.446333 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.447409 kubelet[2706]: E0514 00:01:14.446460 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.448045 kubelet[2706]: E0514 00:01:14.446615 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.448045 kubelet[2706]: W0514 00:01:14.446625 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.448045 kubelet[2706]: E0514 00:01:14.446765 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.448045 kubelet[2706]: W0514 00:01:14.446771 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.448045 kubelet[2706]: E0514 00:01:14.446873 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.448045 kubelet[2706]: W0514 00:01:14.446879 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" May 14 00:01:14.448045 kubelet[2706]: E0514 00:01:14.446887 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.448045 kubelet[2706]: E0514 00:01:14.446902 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.448045 kubelet[2706]: E0514 00:01:14.447008 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.448045 kubelet[2706]: E0514 00:01:14.447063 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.448237 containerd[1478]: time="2025-05-14T00:01:14.447801693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54d8f6f784-mwdtk,Uid:9683a186-dbe8-48ed-affc-26e4956fc780,Namespace:calico-system,Attempt:0,} returns sandbox id \"83ec1b159877a160b1a58a4b5bbc9751a354cfc28009b4a84422959ecda5ec8b\"" May 14 00:01:14.448288 kubelet[2706]: W0514 00:01:14.447103 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.448288 kubelet[2706]: E0514 00:01:14.447116 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.448288 kubelet[2706]: E0514 00:01:14.447955 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.448288 kubelet[2706]: W0514 00:01:14.447966 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.448288 kubelet[2706]: E0514 00:01:14.447977 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.451448 containerd[1478]: time="2025-05-14T00:01:14.451344706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 00:01:14.472949 containerd[1478]: time="2025-05-14T00:01:14.472907357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fllxf,Uid:48d19a8b-2869-45ae-8f43-f1f0b148bd36,Namespace:calico-system,Attempt:0,}" May 14 00:01:14.493267 containerd[1478]: time="2025-05-14T00:01:14.492868474Z" level=info msg="connecting to shim 1e905b210deac866b7ac47d807d8c8e079379d0a14f1b951abe2e465fec7952e" address="unix:///run/containerd/s/94068e28b5c92803faa32cab3d2e7711f971e17dc86bcfff6dec804c4f2a284e" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:14.533576 systemd[1]: Started cri-containerd-1e905b210deac866b7ac47d807d8c8e079379d0a14f1b951abe2e465fec7952e.scope - libcontainer container 1e905b210deac866b7ac47d807d8c8e079379d0a14f1b951abe2e465fec7952e. 
May 14 00:01:14.546112 kubelet[2706]: E0514 00:01:14.545810 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.546112 kubelet[2706]: W0514 00:01:14.545835 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.546112 kubelet[2706]: E0514 00:01:14.545856 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.546384 kubelet[2706]: E0514 00:01:14.546353 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.546742 kubelet[2706]: W0514 00:01:14.546593 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.546742 kubelet[2706]: E0514 00:01:14.546630 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.546917 kubelet[2706]: E0514 00:01:14.546864 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.546917 kubelet[2706]: W0514 00:01:14.546881 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.546917 kubelet[2706]: E0514 00:01:14.546897 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.547087 kubelet[2706]: E0514 00:01:14.547040 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.547087 kubelet[2706]: W0514 00:01:14.547052 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.547087 kubelet[2706]: E0514 00:01:14.547068 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.547286 kubelet[2706]: E0514 00:01:14.547256 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.547286 kubelet[2706]: W0514 00:01:14.547265 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.547286 kubelet[2706]: E0514 00:01:14.547274 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.547740 kubelet[2706]: E0514 00:01:14.547721 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.547740 kubelet[2706]: W0514 00:01:14.547735 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.547822 kubelet[2706]: E0514 00:01:14.547745 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.548157 kubelet[2706]: E0514 00:01:14.547912 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.548157 kubelet[2706]: W0514 00:01:14.547923 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.548157 kubelet[2706]: E0514 00:01:14.547931 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.548259 kubelet[2706]: E0514 00:01:14.548180 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.548259 kubelet[2706]: W0514 00:01:14.548189 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.548259 kubelet[2706]: E0514 00:01:14.548199 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.548335 kubelet[2706]: E0514 00:01:14.548324 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.548335 kubelet[2706]: W0514 00:01:14.548331 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.548402 kubelet[2706]: E0514 00:01:14.548339 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.548687 kubelet[2706]: E0514 00:01:14.548581 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.548687 kubelet[2706]: W0514 00:01:14.548597 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.548687 kubelet[2706]: E0514 00:01:14.548612 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.549153 kubelet[2706]: E0514 00:01:14.549021 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.549153 kubelet[2706]: W0514 00:01:14.549037 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.549153 kubelet[2706]: E0514 00:01:14.549050 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.549309 kubelet[2706]: E0514 00:01:14.549294 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.549359 kubelet[2706]: W0514 00:01:14.549349 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.549448 kubelet[2706]: E0514 00:01:14.549434 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.549665 kubelet[2706]: E0514 00:01:14.549641 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.549665 kubelet[2706]: W0514 00:01:14.549656 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.549731 kubelet[2706]: E0514 00:01:14.549675 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.549842 kubelet[2706]: E0514 00:01:14.549831 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.549871 kubelet[2706]: W0514 00:01:14.549844 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.549871 kubelet[2706]: E0514 00:01:14.549861 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.550029 kubelet[2706]: E0514 00:01:14.550016 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.550065 kubelet[2706]: W0514 00:01:14.550048 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.550097 kubelet[2706]: E0514 00:01:14.550065 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.550321 kubelet[2706]: E0514 00:01:14.550308 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.550352 kubelet[2706]: W0514 00:01:14.550320 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.550352 kubelet[2706]: E0514 00:01:14.550339 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.550614 kubelet[2706]: E0514 00:01:14.550602 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.550648 kubelet[2706]: W0514 00:01:14.550617 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.550680 kubelet[2706]: E0514 00:01:14.550668 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.551015 kubelet[2706]: E0514 00:01:14.550997 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.551015 kubelet[2706]: W0514 00:01:14.551011 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.551191 kubelet[2706]: E0514 00:01:14.551099 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.551661 kubelet[2706]: E0514 00:01:14.551646 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.551695 kubelet[2706]: W0514 00:01:14.551665 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.551716 kubelet[2706]: E0514 00:01:14.551692 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.551912 kubelet[2706]: E0514 00:01:14.551899 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.551912 kubelet[2706]: W0514 00:01:14.551911 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.551972 kubelet[2706]: E0514 00:01:14.551934 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.552131 kubelet[2706]: E0514 00:01:14.552120 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.552165 kubelet[2706]: W0514 00:01:14.552131 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.552165 kubelet[2706]: E0514 00:01:14.552160 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.552387 kubelet[2706]: E0514 00:01:14.552374 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.552387 kubelet[2706]: W0514 00:01:14.552386 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.552517 kubelet[2706]: E0514 00:01:14.552473 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.552679 kubelet[2706]: E0514 00:01:14.552662 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.552679 kubelet[2706]: W0514 00:01:14.552675 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.552739 kubelet[2706]: E0514 00:01:14.552689 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.552871 kubelet[2706]: E0514 00:01:14.552856 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.552871 kubelet[2706]: W0514 00:01:14.552868 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.552927 kubelet[2706]: E0514 00:01:14.552876 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:14.553112 kubelet[2706]: E0514 00:01:14.553098 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.553112 kubelet[2706]: W0514 00:01:14.553110 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.553168 kubelet[2706]: E0514 00:01:14.553119 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:14.557255 containerd[1478]: time="2025-05-14T00:01:14.557221045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fllxf,Uid:48d19a8b-2869-45ae-8f43-f1f0b148bd36,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e905b210deac866b7ac47d807d8c8e079379d0a14f1b951abe2e465fec7952e\"" May 14 00:01:14.561169 kubelet[2706]: E0514 00:01:14.561133 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:14.561169 kubelet[2706]: W0514 00:01:14.561154 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:14.561169 kubelet[2706]: E0514 00:01:14.561170 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:15.887692 kubelet[2706]: E0514 00:01:15.887636 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72zk5" podUID="22d9bc0e-3492-4477-b79c-f76bddcb0f2b" May 14 00:01:17.629725 systemd[1]: Started sshd@7-10.0.0.146:22-10.0.0.1:59520.service - OpenSSH per-connection server daemon (10.0.0.1:59520). May 14 00:01:17.690177 sshd[3302]: Accepted publickey for core from 10.0.0.1 port 59520 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:01:17.691850 sshd-session[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:17.697065 systemd-logind[1465]: New session 8 of user core. May 14 00:01:17.701526 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 14 00:01:17.755327 containerd[1478]: time="2025-05-14T00:01:17.755272704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:17.756164 containerd[1478]: time="2025-05-14T00:01:17.756093728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 14 00:01:17.757512 containerd[1478]: time="2025-05-14T00:01:17.757474424Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:17.759461 containerd[1478]: time="2025-05-14T00:01:17.759428677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:17.760705 containerd[1478]: time="2025-05-14T00:01:17.760676017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 3.309273134s" May 14 00:01:17.760755 containerd[1478]: time="2025-05-14T00:01:17.760718829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 14 00:01:17.761693 containerd[1478]: time="2025-05-14T00:01:17.761670648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 00:01:17.776575 containerd[1478]: time="2025-05-14T00:01:17.776316762Z" level=info msg="CreateContainer within sandbox \"83ec1b159877a160b1a58a4b5bbc9751a354cfc28009b4a84422959ecda5ec8b\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 00:01:17.783667 containerd[1478]: time="2025-05-14T00:01:17.783627996Z" level=info msg="Container 6de74b1ca159c5658a964cc5e67e81af1b9fe582daf7d39002addf21805470fe: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:17.791120 containerd[1478]: time="2025-05-14T00:01:17.791077707Z" level=info msg="CreateContainer within sandbox \"83ec1b159877a160b1a58a4b5bbc9751a354cfc28009b4a84422959ecda5ec8b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6de74b1ca159c5658a964cc5e67e81af1b9fe582daf7d39002addf21805470fe\"" May 14 00:01:17.791643 containerd[1478]: time="2025-05-14T00:01:17.791608372Z" level=info msg="StartContainer for \"6de74b1ca159c5658a964cc5e67e81af1b9fe582daf7d39002addf21805470fe\"" May 14 00:01:17.792685 containerd[1478]: time="2025-05-14T00:01:17.792645535Z" level=info msg="connecting to shim 6de74b1ca159c5658a964cc5e67e81af1b9fe582daf7d39002addf21805470fe" address="unix:///run/containerd/s/6bb32b7b813c1fd7b3ef6f1672c00503924d2449eff5ae4569a3915c1eda946a" protocol=ttrpc version=3 May 14 00:01:17.818559 systemd[1]: Started cri-containerd-6de74b1ca159c5658a964cc5e67e81af1b9fe582daf7d39002addf21805470fe.scope - libcontainer container 6de74b1ca159c5658a964cc5e67e81af1b9fe582daf7d39002addf21805470fe. May 14 00:01:17.857445 sshd[3304]: Connection closed by 10.0.0.1 port 59520 May 14 00:01:17.857823 sshd-session[3302]: pam_unix(sshd:session): session closed for user core May 14 00:01:17.862746 systemd[1]: sshd@7-10.0.0.146:22-10.0.0.1:59520.service: Deactivated successfully. May 14 00:01:17.866847 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:01:17.868040 systemd-logind[1465]: Session 8 logged out. Waiting for processes to exit. May 14 00:01:17.869160 systemd-logind[1465]: Removed session 8. 
May 14 00:01:17.888329 kubelet[2706]: E0514 00:01:17.887373 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72zk5" podUID="22d9bc0e-3492-4477-b79c-f76bddcb0f2b" May 14 00:01:17.907127 containerd[1478]: time="2025-05-14T00:01:17.905652910Z" level=info msg="StartContainer for \"6de74b1ca159c5658a964cc5e67e81af1b9fe582daf7d39002addf21805470fe\" returns successfully" May 14 00:01:17.989581 kubelet[2706]: I0514 00:01:17.989509 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54d8f6f784-mwdtk" podStartSLOduration=1.677663959 podStartE2EDuration="4.989493852s" podCreationTimestamp="2025-05-14 00:01:13 +0000 UTC" firstStartedPulling="2025-05-14 00:01:14.449657945 +0000 UTC m=+25.643763662" lastFinishedPulling="2025-05-14 00:01:17.761487878 +0000 UTC m=+28.955593555" observedRunningTime="2025-05-14 00:01:17.988969309 +0000 UTC m=+29.183075026" watchObservedRunningTime="2025-05-14 00:01:17.989493852 +0000 UTC m=+29.183599569" May 14 00:01:18.066905 kubelet[2706]: E0514 00:01:18.066867 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.066905 kubelet[2706]: W0514 00:01:18.066892 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.066905 kubelet[2706]: E0514 00:01:18.066913 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:18.067219 kubelet[2706]: E0514 00:01:18.067204 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.067254 kubelet[2706]: W0514 00:01:18.067219 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.067254 kubelet[2706]: E0514 00:01:18.067238 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:18.067641 kubelet[2706]: E0514 00:01:18.067427 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.067641 kubelet[2706]: W0514 00:01:18.067439 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.067641 kubelet[2706]: E0514 00:01:18.067449 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:18.067641 kubelet[2706]: E0514 00:01:18.067606 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.067641 kubelet[2706]: W0514 00:01:18.067614 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.067641 kubelet[2706]: E0514 00:01:18.067622 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:18.067829 kubelet[2706]: E0514 00:01:18.067767 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.067829 kubelet[2706]: W0514 00:01:18.067776 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.067829 kubelet[2706]: E0514 00:01:18.067783 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:18.068047 kubelet[2706]: E0514 00:01:18.068023 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.068047 kubelet[2706]: W0514 00:01:18.068041 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.068114 kubelet[2706]: E0514 00:01:18.068052 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:18.068229 kubelet[2706]: E0514 00:01:18.068200 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.068229 kubelet[2706]: W0514 00:01:18.068219 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.068229 kubelet[2706]: E0514 00:01:18.068229 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:18.068402 kubelet[2706]: E0514 00:01:18.068386 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.068402 kubelet[2706]: W0514 00:01:18.068398 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.068473 kubelet[2706]: E0514 00:01:18.068407 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:18.068589 kubelet[2706]: E0514 00:01:18.068559 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.068589 kubelet[2706]: W0514 00:01:18.068577 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.068589 kubelet[2706]: E0514 00:01:18.068586 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:18.068742 kubelet[2706]: E0514 00:01:18.068712 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.068742 kubelet[2706]: W0514 00:01:18.068727 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.068742 kubelet[2706]: E0514 00:01:18.068736 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:18.068890 kubelet[2706]: E0514 00:01:18.068865 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:18.068890 kubelet[2706]: W0514 00:01:18.068881 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:18.068890 kubelet[2706]: E0514 00:01:18.068890 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
[... identical FlexVolume driver-call error triplet repeated for each probe attempt through May 14 00:01:18.085 ...]
Error: unexpected end of JSON input" May 14 00:01:18.816406 containerd[1478]: time="2025-05-14T00:01:18.816337256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:18.817345 containerd[1478]: time="2025-05-14T00:01:18.817118301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 14 00:01:18.818333 containerd[1478]: time="2025-05-14T00:01:18.818103879Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:18.820115 containerd[1478]: time="2025-05-14T00:01:18.820056471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:18.822427 containerd[1478]: time="2025-05-14T00:01:18.820885369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.059184752s" May 14 00:01:18.822427 containerd[1478]: time="2025-05-14T00:01:18.820914336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 14 00:01:18.822925 containerd[1478]: time="2025-05-14T00:01:18.822887054Z" level=info msg="CreateContainer within sandbox \"1e905b210deac866b7ac47d807d8c8e079379d0a14f1b951abe2e465fec7952e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 00:01:18.833095 containerd[1478]: time="2025-05-14T00:01:18.830385220Z" level=info msg="Container 862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:18.840320 containerd[1478]: time="2025-05-14T00:01:18.840257168Z" level=info msg="CreateContainer within sandbox \"1e905b210deac866b7ac47d807d8c8e079379d0a14f1b951abe2e465fec7952e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc\"" May 14 00:01:18.841421 containerd[1478]: time="2025-05-14T00:01:18.840867088Z" level=info msg="StartContainer for \"862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc\"" May 14 00:01:18.842257 containerd[1478]: time="2025-05-14T00:01:18.842230966Z" level=info msg="connecting to shim 862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc" address="unix:///run/containerd/s/94068e28b5c92803faa32cab3d2e7711f971e17dc86bcfff6dec804c4f2a284e" protocol=ttrpc version=3 May 14 00:01:18.862571 systemd[1]: Started cri-containerd-862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc.scope - libcontainer container 862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc. May 14 00:01:18.917504 containerd[1478]: time="2025-05-14T00:01:18.917167535Z" level=info msg="StartContainer for \"862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc\" returns successfully" May 14 00:01:18.941575 systemd[1]: cri-containerd-862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc.scope: Deactivated successfully. 
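The repeated kubelet errors earlier in this log all follow one pattern: the plugin prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the binary is not yet present (the flexvol-driver init container above is what installs it), and the resulting empty output fails JSON unmarshalling. As context, a minimal sketch of the FlexVolume call convention the kubelet expects is below; this is an illustration of the JSON reply format only, not the actual nodeagent~uds driver:

```shell
#!/bin/sh
# Sketch of the FlexVolume driver call convention: the kubelet invokes the
# driver executable with an operation name ("init", "mount", ...) and parses
# its stdout as a JSON status object. A missing binary or empty output is
# exactly what yields "unexpected end of JSON input" in the log above.
flexvol_driver() {
  case "$1" in
    init)
      # Report success; "attach": false tells the kubelet this driver
      # does not implement attach/detach.
      echo '{"status": "Success", "capabilities": {"attach": false}}'
      ;;
    *)
      # Any operation this sketch does not implement.
      echo '{"status": "Not supported"}'
      ;;
  esac
}
```

Once the real driver binary lands at the probed path and answers init with well-formed JSON, the unmarshal and "executable file not found in $PATH" errors stop.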
May 14 00:01:18.964827 containerd[1478]: time="2025-05-14T00:01:18.964783461Z" level=info msg="received exit event container_id:\"862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc\" id:\"862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc\" pid:3416 exited_at:{seconds:1747180878 nanos:952847771}" May 14 00:01:18.964989 containerd[1478]: time="2025-05-14T00:01:18.964875245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc\" id:\"862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc\" pid:3416 exited_at:{seconds:1747180878 nanos:952847771}" May 14 00:01:19.005623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-862e3522ed8624be5a40596a274ff209067b46ad358990de693aab3b64d0d9bc-rootfs.mount: Deactivated successfully. May 14 00:01:19.887159 kubelet[2706]: E0514 00:01:19.887108 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72zk5" podUID="22d9bc0e-3492-4477-b79c-f76bddcb0f2b" May 14 00:01:19.987148 containerd[1478]: time="2025-05-14T00:01:19.987061433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 00:01:21.888547 kubelet[2706]: E0514 00:01:21.888178 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72zk5" podUID="22d9bc0e-3492-4477-b79c-f76bddcb0f2b" May 14 00:01:22.696741 containerd[1478]: time="2025-05-14T00:01:22.696697967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 
00:01:22.697239 containerd[1478]: time="2025-05-14T00:01:22.697189558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 14 00:01:22.697875 containerd[1478]: time="2025-05-14T00:01:22.697851308Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:22.699561 containerd[1478]: time="2025-05-14T00:01:22.699531969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:22.700321 containerd[1478]: time="2025-05-14T00:01:22.700287780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.713181016s" May 14 00:01:22.700397 containerd[1478]: time="2025-05-14T00:01:22.700322388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 14 00:01:22.703399 containerd[1478]: time="2025-05-14T00:01:22.703344072Z" level=info msg="CreateContainer within sandbox \"1e905b210deac866b7ac47d807d8c8e079379d0a14f1b951abe2e465fec7952e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 00:01:22.711671 containerd[1478]: time="2025-05-14T00:01:22.710536301Z" level=info msg="Container 05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:22.718118 containerd[1478]: time="2025-05-14T00:01:22.718073528Z" level=info msg="CreateContainer within sandbox 
\"1e905b210deac866b7ac47d807d8c8e079379d0a14f1b951abe2e465fec7952e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46\"" May 14 00:01:22.720049 containerd[1478]: time="2025-05-14T00:01:22.718752962Z" level=info msg="StartContainer for \"05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46\"" May 14 00:01:22.721735 containerd[1478]: time="2025-05-14T00:01:22.721706951Z" level=info msg="connecting to shim 05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46" address="unix:///run/containerd/s/94068e28b5c92803faa32cab3d2e7711f971e17dc86bcfff6dec804c4f2a284e" protocol=ttrpc version=3 May 14 00:01:22.741588 systemd[1]: Started cri-containerd-05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46.scope - libcontainer container 05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46. May 14 00:01:22.780311 containerd[1478]: time="2025-05-14T00:01:22.777627576Z" level=info msg="StartContainer for \"05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46\" returns successfully" May 14 00:01:22.873674 systemd[1]: Started sshd@8-10.0.0.146:22-10.0.0.1:39806.service - OpenSSH per-connection server daemon (10.0.0.1:39806). May 14 00:01:23.048201 sshd[3495]: Accepted publickey for core from 10.0.0.1 port 39806 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:01:23.050042 sshd-session[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:23.057255 systemd-logind[1465]: New session 9 of user core. May 14 00:01:23.066584 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 00:01:23.191928 sshd[3498]: Connection closed by 10.0.0.1 port 39806 May 14 00:01:23.192103 sshd-session[3495]: pam_unix(sshd:session): session closed for user core May 14 00:01:23.197096 systemd[1]: sshd@8-10.0.0.146:22-10.0.0.1:39806.service: Deactivated successfully. 
May 14 00:01:23.199437 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:01:23.201710 systemd-logind[1465]: Session 9 logged out. Waiting for processes to exit. May 14 00:01:23.202779 systemd-logind[1465]: Removed session 9. May 14 00:01:23.483093 systemd[1]: cri-containerd-05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46.scope: Deactivated successfully. May 14 00:01:23.483657 systemd[1]: cri-containerd-05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46.scope: Consumed 488ms CPU time, 160M memory peak, 4K read from disk, 150.3M written to disk. May 14 00:01:23.486619 containerd[1478]: time="2025-05-14T00:01:23.486585218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46\" id:\"05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46\" pid:3476 exited_at:{seconds:1747180883 nanos:485555992}" May 14 00:01:23.486970 containerd[1478]: time="2025-05-14T00:01:23.486617985Z" level=info msg="received exit event container_id:\"05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46\" id:\"05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46\" pid:3476 exited_at:{seconds:1747180883 nanos:485555992}" May 14 00:01:23.504724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05ca77b84f0ff22a4821201fe3138bfecd2b5ce838cc9cb403bdef5fe132de46-rootfs.mount: Deactivated successfully. 
May 14 00:01:23.552276 kubelet[2706]: I0514 00:01:23.552246 2706 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 00:01:23.596885 kubelet[2706]: I0514 00:01:23.596833 2706 topology_manager.go:215] "Topology Admit Handler" podUID="7da5a975-f73e-4520-af3e-1620cd792c56" podNamespace="kube-system" podName="coredns-7db6d8ff4d-thdkj" May 14 00:01:23.612314 kubelet[2706]: I0514 00:01:23.611243 2706 topology_manager.go:215] "Topology Admit Handler" podUID="c01206de-1dda-4f2c-824b-aa2a43af3da7" podNamespace="calico-system" podName="calico-kube-controllers-688cf4f594-2dlhr" May 14 00:01:23.612509 kubelet[2706]: I0514 00:01:23.612481 2706 topology_manager.go:215] "Topology Admit Handler" podUID="2ac79996-c5f6-4170-831a-5c4b21085384" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xswn6" May 14 00:01:23.614194 kubelet[2706]: I0514 00:01:23.614007 2706 topology_manager.go:215] "Topology Admit Handler" podUID="89073857-5d38-4c76-b9e5-8939dfb86770" podNamespace="calico-apiserver" podName="calico-apiserver-6569c5bb96-gk456" May 14 00:01:23.614194 kubelet[2706]: I0514 00:01:23.614091 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgspg\" (UniqueName: \"kubernetes.io/projected/7da5a975-f73e-4520-af3e-1620cd792c56-kube-api-access-fgspg\") pod \"coredns-7db6d8ff4d-thdkj\" (UID: \"7da5a975-f73e-4520-af3e-1620cd792c56\") " pod="kube-system/coredns-7db6d8ff4d-thdkj" May 14 00:01:23.614194 kubelet[2706]: I0514 00:01:23.614131 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7da5a975-f73e-4520-af3e-1620cd792c56-config-volume\") pod \"coredns-7db6d8ff4d-thdkj\" (UID: \"7da5a975-f73e-4520-af3e-1620cd792c56\") " pod="kube-system/coredns-7db6d8ff4d-thdkj" May 14 00:01:23.614194 kubelet[2706]: I0514 00:01:23.614135 2706 topology_manager.go:215] "Topology Admit Handler" 
podUID="aa9357b4-acf1-400c-abd6-381a0f8fc5e9" podNamespace="calico-apiserver" podName="calico-apiserver-6569c5bb96-7w2lt" May 14 00:01:23.626400 systemd[1]: Created slice kubepods-besteffort-podc01206de_1dda_4f2c_824b_aa2a43af3da7.slice - libcontainer container kubepods-besteffort-podc01206de_1dda_4f2c_824b_aa2a43af3da7.slice. May 14 00:01:23.637325 systemd[1]: Created slice kubepods-besteffort-podaa9357b4_acf1_400c_abd6_381a0f8fc5e9.slice - libcontainer container kubepods-besteffort-podaa9357b4_acf1_400c_abd6_381a0f8fc5e9.slice. May 14 00:01:23.645301 systemd[1]: Created slice kubepods-burstable-pod7da5a975_f73e_4520_af3e_1620cd792c56.slice - libcontainer container kubepods-burstable-pod7da5a975_f73e_4520_af3e_1620cd792c56.slice. May 14 00:01:23.653629 systemd[1]: Created slice kubepods-burstable-pod2ac79996_c5f6_4170_831a_5c4b21085384.slice - libcontainer container kubepods-burstable-pod2ac79996_c5f6_4170_831a_5c4b21085384.slice. May 14 00:01:23.658281 systemd[1]: Created slice kubepods-besteffort-pod89073857_5d38_4c76_b9e5_8939dfb86770.slice - libcontainer container kubepods-besteffort-pod89073857_5d38_4c76_b9e5_8939dfb86770.slice. 
May 14 00:01:23.816119 kubelet[2706]: I0514 00:01:23.815995 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ac79996-c5f6-4170-831a-5c4b21085384-config-volume\") pod \"coredns-7db6d8ff4d-xswn6\" (UID: \"2ac79996-c5f6-4170-831a-5c4b21085384\") " pod="kube-system/coredns-7db6d8ff4d-xswn6" May 14 00:01:23.816119 kubelet[2706]: I0514 00:01:23.816046 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw4sx\" (UniqueName: \"kubernetes.io/projected/2ac79996-c5f6-4170-831a-5c4b21085384-kube-api-access-mw4sx\") pod \"coredns-7db6d8ff4d-xswn6\" (UID: \"2ac79996-c5f6-4170-831a-5c4b21085384\") " pod="kube-system/coredns-7db6d8ff4d-xswn6" May 14 00:01:23.816119 kubelet[2706]: I0514 00:01:23.816069 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89073857-5d38-4c76-b9e5-8939dfb86770-calico-apiserver-certs\") pod \"calico-apiserver-6569c5bb96-gk456\" (UID: \"89073857-5d38-4c76-b9e5-8939dfb86770\") " pod="calico-apiserver/calico-apiserver-6569c5bb96-gk456" May 14 00:01:23.816119 kubelet[2706]: I0514 00:01:23.816091 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c01206de-1dda-4f2c-824b-aa2a43af3da7-tigera-ca-bundle\") pod \"calico-kube-controllers-688cf4f594-2dlhr\" (UID: \"c01206de-1dda-4f2c-824b-aa2a43af3da7\") " pod="calico-system/calico-kube-controllers-688cf4f594-2dlhr" May 14 00:01:23.816119 kubelet[2706]: I0514 00:01:23.816121 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45g9\" (UniqueName: \"kubernetes.io/projected/aa9357b4-acf1-400c-abd6-381a0f8fc5e9-kube-api-access-b45g9\") pod 
\"calico-apiserver-6569c5bb96-7w2lt\" (UID: \"aa9357b4-acf1-400c-abd6-381a0f8fc5e9\") " pod="calico-apiserver/calico-apiserver-6569c5bb96-7w2lt" May 14 00:01:23.819309 kubelet[2706]: I0514 00:01:23.816141 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwvnz\" (UniqueName: \"kubernetes.io/projected/c01206de-1dda-4f2c-824b-aa2a43af3da7-kube-api-access-nwvnz\") pod \"calico-kube-controllers-688cf4f594-2dlhr\" (UID: \"c01206de-1dda-4f2c-824b-aa2a43af3da7\") " pod="calico-system/calico-kube-controllers-688cf4f594-2dlhr" May 14 00:01:23.819358 kubelet[2706]: I0514 00:01:23.819332 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhnfx\" (UniqueName: \"kubernetes.io/projected/89073857-5d38-4c76-b9e5-8939dfb86770-kube-api-access-hhnfx\") pod \"calico-apiserver-6569c5bb96-gk456\" (UID: \"89073857-5d38-4c76-b9e5-8939dfb86770\") " pod="calico-apiserver/calico-apiserver-6569c5bb96-gk456" May 14 00:01:23.819405 kubelet[2706]: I0514 00:01:23.819375 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa9357b4-acf1-400c-abd6-381a0f8fc5e9-calico-apiserver-certs\") pod \"calico-apiserver-6569c5bb96-7w2lt\" (UID: \"aa9357b4-acf1-400c-abd6-381a0f8fc5e9\") " pod="calico-apiserver/calico-apiserver-6569c5bb96-7w2lt" May 14 00:01:23.892973 systemd[1]: Created slice kubepods-besteffort-pod22d9bc0e_3492_4477_b79c_f76bddcb0f2b.slice - libcontainer container kubepods-besteffort-pod22d9bc0e_3492_4477_b79c_f76bddcb0f2b.slice. 
May 14 00:01:23.895259 containerd[1478]: time="2025-05-14T00:01:23.895217148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-72zk5,Uid:22d9bc0e-3492-4477-b79c-f76bddcb0f2b,Namespace:calico-system,Attempt:0,}" May 14 00:01:23.955389 containerd[1478]: time="2025-05-14T00:01:23.955334868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-thdkj,Uid:7da5a975-f73e-4520-af3e-1620cd792c56,Namespace:kube-system,Attempt:0,}" May 14 00:01:23.958581 containerd[1478]: time="2025-05-14T00:01:23.957614607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xswn6,Uid:2ac79996-c5f6-4170-831a-5c4b21085384,Namespace:kube-system,Attempt:0,}" May 14 00:01:23.967805 containerd[1478]: time="2025-05-14T00:01:23.967479846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6569c5bb96-gk456,Uid:89073857-5d38-4c76-b9e5-8939dfb86770,Namespace:calico-apiserver,Attempt:0,}" May 14 00:01:24.020467 containerd[1478]: time="2025-05-14T00:01:24.019468729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 00:01:24.239926 containerd[1478]: time="2025-05-14T00:01:24.239886332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688cf4f594-2dlhr,Uid:c01206de-1dda-4f2c-824b-aa2a43af3da7,Namespace:calico-system,Attempt:0,}" May 14 00:01:24.247590 containerd[1478]: time="2025-05-14T00:01:24.247558517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6569c5bb96-7w2lt,Uid:aa9357b4-acf1-400c-abd6-381a0f8fc5e9,Namespace:calico-apiserver,Attempt:0,}" May 14 00:01:24.287549 containerd[1478]: time="2025-05-14T00:01:24.287490895Z" level=error msg="Failed to destroy network for sandbox \"aa4c02e891bec8641157578013b148ce640a056f8df2701ec355058f0f58e788\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 14 00:01:24.289043 containerd[1478]: time="2025-05-14T00:01:24.287502457Z" level=error msg="Failed to destroy network for sandbox \"eb0c183cc38e5d44574ea01ba9b0eaa90993b772eb77d3f36a62fde7293912a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.289043 containerd[1478]: time="2025-05-14T00:01:24.288787689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6569c5bb96-gk456,Uid:89073857-5d38-4c76-b9e5-8939dfb86770,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb0c183cc38e5d44574ea01ba9b0eaa90993b772eb77d3f36a62fde7293912a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.289171 kubelet[2706]: E0514 00:01:24.289119 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb0c183cc38e5d44574ea01ba9b0eaa90993b772eb77d3f36a62fde7293912a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.289218 kubelet[2706]: E0514 00:01:24.289197 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb0c183cc38e5d44574ea01ba9b0eaa90993b772eb77d3f36a62fde7293912a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6569c5bb96-gk456" May 14 00:01:24.289281 kubelet[2706]: 
E0514 00:01:24.289216 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb0c183cc38e5d44574ea01ba9b0eaa90993b772eb77d3f36a62fde7293912a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6569c5bb96-gk456" May 14 00:01:24.289322 kubelet[2706]: E0514 00:01:24.289294 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6569c5bb96-gk456_calico-apiserver(89073857-5d38-4c76-b9e5-8939dfb86770)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6569c5bb96-gk456_calico-apiserver(89073857-5d38-4c76-b9e5-8939dfb86770)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb0c183cc38e5d44574ea01ba9b0eaa90993b772eb77d3f36a62fde7293912a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6569c5bb96-gk456" podUID="89073857-5d38-4c76-b9e5-8939dfb86770" May 14 00:01:24.291537 containerd[1478]: time="2025-05-14T00:01:24.291488101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-thdkj,Uid:7da5a975-f73e-4520-af3e-1620cd792c56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa4c02e891bec8641157578013b148ce640a056f8df2701ec355058f0f58e788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.291704 kubelet[2706]: E0514 00:01:24.291654 2706 remote_runtime.go:193] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa4c02e891bec8641157578013b148ce640a056f8df2701ec355058f0f58e788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.291748 kubelet[2706]: E0514 00:01:24.291718 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa4c02e891bec8641157578013b148ce640a056f8df2701ec355058f0f58e788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-thdkj" May 14 00:01:24.291748 kubelet[2706]: E0514 00:01:24.291739 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa4c02e891bec8641157578013b148ce640a056f8df2701ec355058f0f58e788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-thdkj" May 14 00:01:24.291815 kubelet[2706]: E0514 00:01:24.291788 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-thdkj_kube-system(7da5a975-f73e-4520-af3e-1620cd792c56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-thdkj_kube-system(7da5a975-f73e-4520-af3e-1620cd792c56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa4c02e891bec8641157578013b148ce640a056f8df2701ec355058f0f58e788\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-thdkj" podUID="7da5a975-f73e-4520-af3e-1620cd792c56" May 14 00:01:24.298707 containerd[1478]: time="2025-05-14T00:01:24.298658940Z" level=error msg="Failed to destroy network for sandbox \"140224eb1454de5ceb5402775905990da90bac435957f6a873e8150f2edc564e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.300252 containerd[1478]: time="2025-05-14T00:01:24.300210029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xswn6,Uid:2ac79996-c5f6-4170-831a-5c4b21085384,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"140224eb1454de5ceb5402775905990da90bac435957f6a873e8150f2edc564e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.300482 kubelet[2706]: E0514 00:01:24.300423 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"140224eb1454de5ceb5402775905990da90bac435957f6a873e8150f2edc564e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.300537 kubelet[2706]: E0514 00:01:24.300499 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"140224eb1454de5ceb5402775905990da90bac435957f6a873e8150f2edc564e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-xswn6" May 14 00:01:24.300537 kubelet[2706]: E0514 00:01:24.300521 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"140224eb1454de5ceb5402775905990da90bac435957f6a873e8150f2edc564e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xswn6" May 14 00:01:24.300585 kubelet[2706]: E0514 00:01:24.300561 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xswn6_kube-system(2ac79996-c5f6-4170-831a-5c4b21085384)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xswn6_kube-system(2ac79996-c5f6-4170-831a-5c4b21085384)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"140224eb1454de5ceb5402775905990da90bac435957f6a873e8150f2edc564e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xswn6" podUID="2ac79996-c5f6-4170-831a-5c4b21085384" May 14 00:01:24.306386 containerd[1478]: time="2025-05-14T00:01:24.306329805Z" level=error msg="Failed to destroy network for sandbox \"51f59c343c30f42fd20e7e3b654b88ea45fbed67c1d02c4bd0698134ab9136b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.308538 containerd[1478]: time="2025-05-14T00:01:24.307996358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-72zk5,Uid:22d9bc0e-3492-4477-b79c-f76bddcb0f2b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"51f59c343c30f42fd20e7e3b654b88ea45fbed67c1d02c4bd0698134ab9136b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.308678 kubelet[2706]: E0514 00:01:24.308203 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f59c343c30f42fd20e7e3b654b88ea45fbed67c1d02c4bd0698134ab9136b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.308678 kubelet[2706]: E0514 00:01:24.308251 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f59c343c30f42fd20e7e3b654b88ea45fbed67c1d02c4bd0698134ab9136b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-72zk5" May 14 00:01:24.308678 kubelet[2706]: E0514 00:01:24.308274 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f59c343c30f42fd20e7e3b654b88ea45fbed67c1d02c4bd0698134ab9136b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-72zk5" May 14 00:01:24.308767 kubelet[2706]: E0514 00:01:24.308310 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-72zk5_calico-system(22d9bc0e-3492-4477-b79c-f76bddcb0f2b)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"csi-node-driver-72zk5_calico-system(22d9bc0e-3492-4477-b79c-f76bddcb0f2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51f59c343c30f42fd20e7e3b654b88ea45fbed67c1d02c4bd0698134ab9136b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72zk5" podUID="22d9bc0e-3492-4477-b79c-f76bddcb0f2b" May 14 00:01:24.327313 containerd[1478]: time="2025-05-14T00:01:24.327261398Z" level=error msg="Failed to destroy network for sandbox \"39f64ae6419f773275064ed340afeab145cc8cd4bb3ebd502dd71c0f836bb0b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.328358 containerd[1478]: time="2025-05-14T00:01:24.328327544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688cf4f594-2dlhr,Uid:c01206de-1dda-4f2c-824b-aa2a43af3da7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f64ae6419f773275064ed340afeab145cc8cd4bb3ebd502dd71c0f836bb0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.328605 kubelet[2706]: E0514 00:01:24.328565 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f64ae6419f773275064ed340afeab145cc8cd4bb3ebd502dd71c0f836bb0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.328662 
kubelet[2706]: E0514 00:01:24.328622 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f64ae6419f773275064ed340afeab145cc8cd4bb3ebd502dd71c0f836bb0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688cf4f594-2dlhr" May 14 00:01:24.328662 kubelet[2706]: E0514 00:01:24.328640 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f64ae6419f773275064ed340afeab145cc8cd4bb3ebd502dd71c0f836bb0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688cf4f594-2dlhr" May 14 00:01:24.328728 kubelet[2706]: E0514 00:01:24.328682 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-688cf4f594-2dlhr_calico-system(c01206de-1dda-4f2c-824b-aa2a43af3da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-688cf4f594-2dlhr_calico-system(c01206de-1dda-4f2c-824b-aa2a43af3da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39f64ae6419f773275064ed340afeab145cc8cd4bb3ebd502dd71c0f836bb0b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-688cf4f594-2dlhr" podUID="c01206de-1dda-4f2c-824b-aa2a43af3da7" May 14 00:01:24.333724 containerd[1478]: time="2025-05-14T00:01:24.333664874Z" level=error msg="Failed to destroy network for sandbox 
\"1daee9f5b592ed1da94a1e067e5d5d4861c7bbc68313c1403e7e55976d5490cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.334613 containerd[1478]: time="2025-05-14T00:01:24.334571266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6569c5bb96-7w2lt,Uid:aa9357b4-acf1-400c-abd6-381a0f8fc5e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1daee9f5b592ed1da94a1e067e5d5d4861c7bbc68313c1403e7e55976d5490cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.334796 kubelet[2706]: E0514 00:01:24.334754 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1daee9f5b592ed1da94a1e067e5d5d4861c7bbc68313c1403e7e55976d5490cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:24.334842 kubelet[2706]: E0514 00:01:24.334803 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1daee9f5b592ed1da94a1e067e5d5d4861c7bbc68313c1403e7e55976d5490cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6569c5bb96-7w2lt" May 14 00:01:24.334842 kubelet[2706]: E0514 00:01:24.334824 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"1daee9f5b592ed1da94a1e067e5d5d4861c7bbc68313c1403e7e55976d5490cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6569c5bb96-7w2lt" May 14 00:01:24.334894 kubelet[2706]: E0514 00:01:24.334856 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6569c5bb96-7w2lt_calico-apiserver(aa9357b4-acf1-400c-abd6-381a0f8fc5e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6569c5bb96-7w2lt_calico-apiserver(aa9357b4-acf1-400c-abd6-381a0f8fc5e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1daee9f5b592ed1da94a1e067e5d5d4861c7bbc68313c1403e7e55976d5490cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6569c5bb96-7w2lt" podUID="aa9357b4-acf1-400c-abd6-381a0f8fc5e9" May 14 00:01:24.728737 systemd[1]: run-netns-cni\x2d2952b5c0\x2d5380\x2d0b5a\x2db1f4\x2d1162ff883af0.mount: Deactivated successfully. May 14 00:01:28.003865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437066908.mount: Deactivated successfully. May 14 00:01:28.205012 systemd[1]: Started sshd@9-10.0.0.146:22-10.0.0.1:39820.service - OpenSSH per-connection server daemon (10.0.0.1:39820). 
May 14 00:01:28.249458 containerd[1478]: time="2025-05-14T00:01:28.249405550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:28.249869 containerd[1478]: time="2025-05-14T00:01:28.249812066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 14 00:01:28.252090 containerd[1478]: time="2025-05-14T00:01:28.252054607Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:28.253719 containerd[1478]: time="2025-05-14T00:01:28.253665429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:28.254812 containerd[1478]: time="2025-05-14T00:01:28.254214052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.234704474s" May 14 00:01:28.254812 containerd[1478]: time="2025-05-14T00:01:28.254251899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 14 00:01:28.265062 containerd[1478]: time="2025-05-14T00:01:28.265007196Z" level=info msg="CreateContainer within sandbox \"1e905b210deac866b7ac47d807d8c8e079379d0a14f1b951abe2e465fec7952e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 00:01:28.293259 sshd[3761]: Accepted publickey for core from 10.0.0.1 port 39820 ssh2: RSA 
SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:01:28.295137 sshd-session[3761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:28.300717 systemd-logind[1465]: New session 10 of user core. May 14 00:01:28.308537 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 00:01:28.320999 containerd[1478]: time="2025-05-14T00:01:28.320686677Z" level=info msg="Container 79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:28.340186 containerd[1478]: time="2025-05-14T00:01:28.340129724Z" level=info msg="CreateContainer within sandbox \"1e905b210deac866b7ac47d807d8c8e079379d0a14f1b951abe2e465fec7952e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db\"" May 14 00:01:28.340890 containerd[1478]: time="2025-05-14T00:01:28.340829175Z" level=info msg="StartContainer for \"79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db\"" May 14 00:01:28.342585 containerd[1478]: time="2025-05-14T00:01:28.342561540Z" level=info msg="connecting to shim 79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db" address="unix:///run/containerd/s/94068e28b5c92803faa32cab3d2e7711f971e17dc86bcfff6dec804c4f2a284e" protocol=ttrpc version=3 May 14 00:01:28.361551 systemd[1]: Started cri-containerd-79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db.scope - libcontainer container 79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db. 
May 14 00:01:28.411108 containerd[1478]: time="2025-05-14T00:01:28.411070988Z" level=info msg="StartContainer for \"79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db\" returns successfully" May 14 00:01:28.442174 sshd[3765]: Connection closed by 10.0.0.1 port 39820 May 14 00:01:28.442574 sshd-session[3761]: pam_unix(sshd:session): session closed for user core May 14 00:01:28.454962 systemd[1]: sshd@9-10.0.0.146:22-10.0.0.1:39820.service: Deactivated successfully. May 14 00:01:28.456906 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:01:28.458815 systemd-logind[1465]: Session 10 logged out. Waiting for processes to exit. May 14 00:01:28.460967 systemd[1]: Started sshd@10-10.0.0.146:22-10.0.0.1:39834.service - OpenSSH per-connection server daemon (10.0.0.1:39834). May 14 00:01:28.462789 systemd-logind[1465]: Removed session 10. May 14 00:01:28.515984 sshd[3806]: Accepted publickey for core from 10.0.0.1 port 39834 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:01:28.517539 sshd-session[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:28.525788 systemd-logind[1465]: New session 11 of user core. May 14 00:01:28.537762 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 00:01:28.619141 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 00:01:28.619266 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 14 00:01:28.716918 sshd[3814]: Connection closed by 10.0.0.1 port 39834 May 14 00:01:28.717601 sshd-session[3806]: pam_unix(sshd:session): session closed for user core May 14 00:01:28.727952 systemd[1]: sshd@10-10.0.0.146:22-10.0.0.1:39834.service: Deactivated successfully. May 14 00:01:28.734592 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:01:28.739794 systemd-logind[1465]: Session 11 logged out. Waiting for processes to exit. 
May 14 00:01:28.744712 systemd[1]: Started sshd@11-10.0.0.146:22-10.0.0.1:39838.service - OpenSSH per-connection server daemon (10.0.0.1:39838). May 14 00:01:28.748313 systemd-logind[1465]: Removed session 11. May 14 00:01:28.818511 sshd[3841]: Accepted publickey for core from 10.0.0.1 port 39838 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:01:28.821055 sshd-session[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:28.829567 systemd-logind[1465]: New session 12 of user core. May 14 00:01:28.837546 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 00:01:28.960416 sshd[3846]: Connection closed by 10.0.0.1 port 39838 May 14 00:01:28.960957 sshd-session[3841]: pam_unix(sshd:session): session closed for user core May 14 00:01:28.964359 systemd[1]: sshd@11-10.0.0.146:22-10.0.0.1:39838.service: Deactivated successfully. May 14 00:01:28.966244 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:01:28.967019 systemd-logind[1465]: Session 12 logged out. Waiting for processes to exit. May 14 00:01:28.967873 systemd-logind[1465]: Removed session 12. 
May 14 00:01:29.034217 kubelet[2706]: I0514 00:01:29.034153 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fllxf" podStartSLOduration=1.337658687 podStartE2EDuration="15.034135224s" podCreationTimestamp="2025-05-14 00:01:14 +0000 UTC" firstStartedPulling="2025-05-14 00:01:14.558328986 +0000 UTC m=+25.752434663" lastFinishedPulling="2025-05-14 00:01:28.254805483 +0000 UTC m=+39.448911200" observedRunningTime="2025-05-14 00:01:29.033842851 +0000 UTC m=+40.227948608" watchObservedRunningTime="2025-05-14 00:01:29.034135224 +0000 UTC m=+40.228240941" May 14 00:01:30.093385 kubelet[2706]: I0514 00:01:30.090679 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:30.177395 kernel: bpftool[3997]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 14 00:01:30.316788 containerd[1478]: time="2025-05-14T00:01:30.316746191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db\" id:\"3e4d5762bdc492fd2cc05f97441b788ac67ccc736be8cc96db9eaeca6958feed\" pid:4010 exit_status:1 exited_at:{seconds:1747180890 nanos:310058923}" May 14 00:01:30.357187 systemd-networkd[1398]: vxlan.calico: Link UP May 14 00:01:30.357503 systemd-networkd[1398]: vxlan.calico: Gained carrier May 14 00:01:30.407603 containerd[1478]: time="2025-05-14T00:01:30.407564556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db\" id:\"5a8cb602becc71d2b0d090cd5c496abaeec7bb3394fa99dca3a3959b7b2691ed\" pid:4052 exit_status:1 exited_at:{seconds:1747180890 nanos:407177687}" May 14 00:01:31.087290 containerd[1478]: time="2025-05-14T00:01:31.087242971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db\" 
id:\"fb8c8c2c3130715106a1b665837a49482946795f30ae27b06865ebc9d56f7ba3\" pid:4134 exit_status:1 exited_at:{seconds:1747180891 nanos:86969444}" May 14 00:01:32.419552 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL May 14 00:01:33.981944 systemd[1]: Started sshd@12-10.0.0.146:22-10.0.0.1:53660.service - OpenSSH per-connection server daemon (10.0.0.1:53660). May 14 00:01:34.040394 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 53660 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:01:34.042229 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:34.046854 systemd-logind[1465]: New session 13 of user core. May 14 00:01:34.058290 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 00:01:34.205509 sshd[4151]: Connection closed by 10.0.0.1 port 53660 May 14 00:01:34.205940 sshd-session[4149]: pam_unix(sshd:session): session closed for user core May 14 00:01:34.209473 systemd[1]: sshd@12-10.0.0.146:22-10.0.0.1:53660.service: Deactivated successfully. May 14 00:01:34.211027 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:01:34.211987 systemd-logind[1465]: Session 13 logged out. Waiting for processes to exit. May 14 00:01:34.213667 systemd-logind[1465]: Removed session 13. 
May 14 00:01:34.891477 containerd[1478]: time="2025-05-14T00:01:34.890438202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xswn6,Uid:2ac79996-c5f6-4170-831a-5c4b21085384,Namespace:kube-system,Attempt:0,}" May 14 00:01:34.891477 containerd[1478]: time="2025-05-14T00:01:34.890476368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-72zk5,Uid:22d9bc0e-3492-4477-b79c-f76bddcb0f2b,Namespace:calico-system,Attempt:0,}" May 14 00:01:35.156051 systemd-networkd[1398]: cali59b9575d0be: Link UP May 14 00:01:35.157020 systemd-networkd[1398]: cali59b9575d0be: Gained carrier May 14 00:01:35.185848 containerd[1478]: 2025-05-14 00:01:34.964 [INFO][4175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--72zk5-eth0 csi-node-driver- calico-system 22d9bc0e-3492-4477-b79c-f76bddcb0f2b 610 0 2025-05-14 00:01:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-72zk5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali59b9575d0be [] []}} ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Namespace="calico-system" Pod="csi-node-driver-72zk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--72zk5-" May 14 00:01:35.185848 containerd[1478]: 2025-05-14 00:01:34.966 [INFO][4175] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Namespace="calico-system" Pod="csi-node-driver-72zk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--72zk5-eth0" May 14 00:01:35.185848 containerd[1478]: 2025-05-14 00:01:35.087 [INFO][4195] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" HandleID="k8s-pod-network.2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Workload="localhost-k8s-csi--node--driver--72zk5-eth0" May 14 00:01:35.186083 containerd[1478]: 2025-05-14 00:01:35.102 [INFO][4195] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" HandleID="k8s-pod-network.2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Workload="localhost-k8s-csi--node--driver--72zk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-72zk5", "timestamp":"2025-05-14 00:01:35.087107863 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:35.186083 containerd[1478]: 2025-05-14 00:01:35.102 [INFO][4195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:35.186083 containerd[1478]: 2025-05-14 00:01:35.102 [INFO][4195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:35.186083 containerd[1478]: 2025-05-14 00:01:35.102 [INFO][4195] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:35.186083 containerd[1478]: 2025-05-14 00:01:35.104 [INFO][4195] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" host="localhost" May 14 00:01:35.186083 containerd[1478]: 2025-05-14 00:01:35.111 [INFO][4195] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:35.186083 containerd[1478]: 2025-05-14 00:01:35.115 [INFO][4195] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:35.186083 containerd[1478]: 2025-05-14 00:01:35.117 [INFO][4195] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:35.186083 containerd[1478]: 2025-05-14 00:01:35.120 [INFO][4195] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:35.186083 containerd[1478]: 2025-05-14 00:01:35.120 [INFO][4195] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" host="localhost" May 14 00:01:35.186288 containerd[1478]: 2025-05-14 00:01:35.121 [INFO][4195] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94 May 14 00:01:35.186288 containerd[1478]: 2025-05-14 00:01:35.135 [INFO][4195] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" host="localhost" May 14 00:01:35.186288 containerd[1478]: 2025-05-14 00:01:35.147 [INFO][4195] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" host="localhost" May 14 00:01:35.186288 containerd[1478]: 2025-05-14 00:01:35.147 [INFO][4195] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" host="localhost" May 14 00:01:35.186288 containerd[1478]: 2025-05-14 00:01:35.147 [INFO][4195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:35.186288 containerd[1478]: 2025-05-14 00:01:35.147 [INFO][4195] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" HandleID="k8s-pod-network.2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Workload="localhost-k8s-csi--node--driver--72zk5-eth0" May 14 00:01:35.186430 containerd[1478]: 2025-05-14 00:01:35.149 [INFO][4175] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Namespace="calico-system" Pod="csi-node-driver-72zk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--72zk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--72zk5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22d9bc0e-3492-4477-b79c-f76bddcb0f2b", ResourceVersion:"610", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-72zk5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59b9575d0be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:35.186430 containerd[1478]: 2025-05-14 00:01:35.149 [INFO][4175] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Namespace="calico-system" Pod="csi-node-driver-72zk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--72zk5-eth0" May 14 00:01:35.186499 containerd[1478]: 2025-05-14 00:01:35.150 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59b9575d0be ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Namespace="calico-system" Pod="csi-node-driver-72zk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--72zk5-eth0" May 14 00:01:35.186499 containerd[1478]: 2025-05-14 00:01:35.157 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Namespace="calico-system" Pod="csi-node-driver-72zk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--72zk5-eth0" May 14 00:01:35.186541 containerd[1478]: 2025-05-14 00:01:35.158 [INFO][4175] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Namespace="calico-system" 
Pod="csi-node-driver-72zk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--72zk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--72zk5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22d9bc0e-3492-4477-b79c-f76bddcb0f2b", ResourceVersion:"610", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94", Pod:"csi-node-driver-72zk5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59b9575d0be", MAC:"56:c1:38:11:6c:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:35.186589 containerd[1478]: 2025-05-14 00:01:35.183 [INFO][4175] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" Namespace="calico-system" Pod="csi-node-driver-72zk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--72zk5-eth0" May 14 00:01:35.222821 systemd-networkd[1398]: 
cali3095cbb9336: Link UP May 14 00:01:35.223203 systemd-networkd[1398]: cali3095cbb9336: Gained carrier May 14 00:01:35.243795 containerd[1478]: 2025-05-14 00:01:34.963 [INFO][4165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0 coredns-7db6d8ff4d- kube-system 2ac79996-c5f6-4170-831a-5c4b21085384 749 0 2025-05-14 00:01:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-xswn6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3095cbb9336 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xswn6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xswn6-" May 14 00:01:35.243795 containerd[1478]: 2025-05-14 00:01:34.963 [INFO][4165] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xswn6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0" May 14 00:01:35.243795 containerd[1478]: 2025-05-14 00:01:35.087 [INFO][4193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" HandleID="k8s-pod-network.e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" Workload="localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0" May 14 00:01:35.244001 containerd[1478]: 2025-05-14 00:01:35.104 [INFO][4193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" HandleID="k8s-pod-network.e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" 
Workload="localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-xswn6", "timestamp":"2025-05-14 00:01:35.087112304 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:35.244001 containerd[1478]: 2025-05-14 00:01:35.104 [INFO][4193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:35.244001 containerd[1478]: 2025-05-14 00:01:35.147 [INFO][4193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:01:35.244001 containerd[1478]: 2025-05-14 00:01:35.147 [INFO][4193] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:35.244001 containerd[1478]: 2025-05-14 00:01:35.150 [INFO][4193] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" host="localhost" May 14 00:01:35.244001 containerd[1478]: 2025-05-14 00:01:35.157 [INFO][4193] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:35.244001 containerd[1478]: 2025-05-14 00:01:35.163 [INFO][4193] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:35.244001 containerd[1478]: 2025-05-14 00:01:35.166 [INFO][4193] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:35.244001 containerd[1478]: 2025-05-14 00:01:35.183 [INFO][4193] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:35.244001 containerd[1478]: 2025-05-14 00:01:35.183 [INFO][4193] ipam/ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" host="localhost" May 14 00:01:35.244206 containerd[1478]: 2025-05-14 00:01:35.185 [INFO][4193] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154 May 14 00:01:35.244206 containerd[1478]: 2025-05-14 00:01:35.203 [INFO][4193] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" host="localhost" May 14 00:01:35.244206 containerd[1478]: 2025-05-14 00:01:35.218 [INFO][4193] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" host="localhost" May 14 00:01:35.244206 containerd[1478]: 2025-05-14 00:01:35.218 [INFO][4193] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" host="localhost" May 14 00:01:35.244206 containerd[1478]: 2025-05-14 00:01:35.218 [INFO][4193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 00:01:35.244206 containerd[1478]: 2025-05-14 00:01:35.218 [INFO][4193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" HandleID="k8s-pod-network.e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" Workload="localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0" May 14 00:01:35.244334 containerd[1478]: 2025-05-14 00:01:35.221 [INFO][4165] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xswn6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2ac79996-c5f6-4170-831a-5c4b21085384", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-xswn6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3095cbb9336", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:35.244422 containerd[1478]: 2025-05-14 00:01:35.221 [INFO][4165] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xswn6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0" May 14 00:01:35.244422 containerd[1478]: 2025-05-14 00:01:35.221 [INFO][4165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3095cbb9336 ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xswn6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0" May 14 00:01:35.244422 containerd[1478]: 2025-05-14 00:01:35.222 [INFO][4165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xswn6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0" May 14 00:01:35.244690 containerd[1478]: 2025-05-14 00:01:35.223 [INFO][4165] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xswn6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2ac79996-c5f6-4170-831a-5c4b21085384", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154", Pod:"coredns-7db6d8ff4d-xswn6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3095cbb9336", MAC:"2a:ad:ec:92:ee:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:35.244690 containerd[1478]: 2025-05-14 00:01:35.239 [INFO][4165] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xswn6" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xswn6-eth0" May 14 00:01:35.293548 containerd[1478]: time="2025-05-14T00:01:35.293038316Z" level=info msg="connecting to shim e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154" address="unix:///run/containerd/s/cf11cad6be3827a9f1bf6f0c031f8c07e95f857377fd06920533bbc2134c4d7b" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:35.293548 containerd[1478]: time="2025-05-14T00:01:35.293040796Z" level=info msg="connecting to shim 2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94" address="unix:///run/containerd/s/ada55b3cb42a2a43b456533b666788f8cee7584fcfc725f9fb998daf9c3779f0" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:35.316572 systemd[1]: Started cri-containerd-2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94.scope - libcontainer container 2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94. May 14 00:01:35.321148 systemd[1]: Started cri-containerd-e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154.scope - libcontainer container e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154. 
May 14 00:01:35.330443 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:35.335303 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:35.359018 containerd[1478]: time="2025-05-14T00:01:35.358984428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xswn6,Uid:2ac79996-c5f6-4170-831a-5c4b21085384,Namespace:kube-system,Attempt:0,} returns sandbox id \"e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154\"" May 14 00:01:35.359970 containerd[1478]: time="2025-05-14T00:01:35.359933938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-72zk5,Uid:22d9bc0e-3492-4477-b79c-f76bddcb0f2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94\"" May 14 00:01:35.361006 containerd[1478]: time="2025-05-14T00:01:35.360984863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 00:01:35.362102 containerd[1478]: time="2025-05-14T00:01:35.362074035Z" level=info msg="CreateContainer within sandbox \"e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:01:35.370843 containerd[1478]: time="2025-05-14T00:01:35.370802251Z" level=info msg="Container 406d80b97288e9031360791066e56bd8de90c24929d6439ef9e6b72e406fcd22: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:35.375552 containerd[1478]: time="2025-05-14T00:01:35.375516793Z" level=info msg="CreateContainer within sandbox \"e81604ace592e9c61377d417991811f0c17866284a08e7a381f2dd8edf648154\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"406d80b97288e9031360791066e56bd8de90c24929d6439ef9e6b72e406fcd22\"" May 14 00:01:35.376044 containerd[1478]: time="2025-05-14T00:01:35.376017552Z" level=info msg="StartContainer 
for \"406d80b97288e9031360791066e56bd8de90c24929d6439ef9e6b72e406fcd22\"" May 14 00:01:35.376844 containerd[1478]: time="2025-05-14T00:01:35.376821119Z" level=info msg="connecting to shim 406d80b97288e9031360791066e56bd8de90c24929d6439ef9e6b72e406fcd22" address="unix:///run/containerd/s/cf11cad6be3827a9f1bf6f0c031f8c07e95f857377fd06920533bbc2134c4d7b" protocol=ttrpc version=3 May 14 00:01:35.401599 systemd[1]: Started cri-containerd-406d80b97288e9031360791066e56bd8de90c24929d6439ef9e6b72e406fcd22.scope - libcontainer container 406d80b97288e9031360791066e56bd8de90c24929d6439ef9e6b72e406fcd22. May 14 00:01:35.437780 containerd[1478]: time="2025-05-14T00:01:35.437544689Z" level=info msg="StartContainer for \"406d80b97288e9031360791066e56bd8de90c24929d6439ef9e6b72e406fcd22\" returns successfully" May 14 00:01:35.888608 containerd[1478]: time="2025-05-14T00:01:35.888557604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6569c5bb96-gk456,Uid:89073857-5d38-4c76-b9e5-8939dfb86770,Namespace:calico-apiserver,Attempt:0,}" May 14 00:01:36.057565 kubelet[2706]: I0514 00:01:36.057284 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xswn6" podStartSLOduration=33.057268568 podStartE2EDuration="33.057268568s" podCreationTimestamp="2025-05-14 00:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:36.056405355 +0000 UTC m=+47.250511072" watchObservedRunningTime="2025-05-14 00:01:36.057268568 +0000 UTC m=+47.251374245" May 14 00:01:36.068916 systemd-networkd[1398]: cali5382d86325f: Link UP May 14 00:01:36.070164 systemd-networkd[1398]: cali5382d86325f: Gained carrier May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:35.984 [INFO][4366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0 calico-apiserver-6569c5bb96- calico-apiserver 89073857-5d38-4c76-b9e5-8939dfb86770 753 0 2025-05-14 00:01:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6569c5bb96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6569c5bb96-gk456 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5382d86325f [] []}} ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-gk456" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--gk456-" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:35.984 [INFO][4366] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-gk456" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.012 [INFO][4381] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" HandleID="k8s-pod-network.1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Workload="localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.024 [INFO][4381] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" HandleID="k8s-pod-network.1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Workload="localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x4000429d90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6569c5bb96-gk456", "timestamp":"2025-05-14 00:01:36.012120881 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.024 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.024 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.024 [INFO][4381] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.026 [INFO][4381] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" host="localhost" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.031 [INFO][4381] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.037 [INFO][4381] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.041 [INFO][4381] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.045 [INFO][4381] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.045 [INFO][4381] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" host="localhost" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.046 [INFO][4381] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25 May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.053 [INFO][4381] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" host="localhost" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.063 [INFO][4381] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" host="localhost" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.063 [INFO][4381] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" host="localhost" May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.063 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 00:01:36.089726 containerd[1478]: 2025-05-14 00:01:36.063 [INFO][4381] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" HandleID="k8s-pod-network.1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Workload="localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0" May 14 00:01:36.090748 containerd[1478]: 2025-05-14 00:01:36.066 [INFO][4366] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-gk456" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0", GenerateName:"calico-apiserver-6569c5bb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"89073857-5d38-4c76-b9e5-8939dfb86770", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6569c5bb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6569c5bb96-gk456", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5382d86325f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:36.090748 containerd[1478]: 2025-05-14 00:01:36.067 [INFO][4366] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-gk456" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0" May 14 00:01:36.090748 containerd[1478]: 2025-05-14 00:01:36.067 [INFO][4366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5382d86325f ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-gk456" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0" May 14 00:01:36.090748 containerd[1478]: 2025-05-14 00:01:36.069 [INFO][4366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-gk456" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0" May 14 00:01:36.090748 containerd[1478]: 2025-05-14 00:01:36.070 [INFO][4366] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-gk456" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0", GenerateName:"calico-apiserver-6569c5bb96-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"89073857-5d38-4c76-b9e5-8939dfb86770", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6569c5bb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25", Pod:"calico-apiserver-6569c5bb96-gk456", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5382d86325f", MAC:"92:95:55:4b:0d:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:36.090748 containerd[1478]: 2025-05-14 00:01:36.086 [INFO][4366] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-gk456" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--gk456-eth0" May 14 00:01:36.117126 containerd[1478]: time="2025-05-14T00:01:36.116922053Z" level=info msg="connecting to shim 1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25" address="unix:///run/containerd/s/fb3672041288bb9cb7e71e587763e78014fc786fee5b12f6b6739ca0e079b444" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:36.142530 systemd[1]: Started 
cri-containerd-1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25.scope - libcontainer container 1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25. May 14 00:01:36.153059 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:36.172959 containerd[1478]: time="2025-05-14T00:01:36.172911893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6569c5bb96-gk456,Uid:89073857-5d38-4c76-b9e5-8939dfb86770,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25\"" May 14 00:01:36.515869 containerd[1478]: time="2025-05-14T00:01:36.515754358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:36.516678 containerd[1478]: time="2025-05-14T00:01:36.516633334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 14 00:01:36.517395 containerd[1478]: time="2025-05-14T00:01:36.517328961Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:36.519524 containerd[1478]: time="2025-05-14T00:01:36.519482654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:36.520208 containerd[1478]: time="2025-05-14T00:01:36.520180481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.159165053s" May 14 00:01:36.520252 containerd[1478]: time="2025-05-14T00:01:36.520212486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 14 00:01:36.521036 containerd[1478]: time="2025-05-14T00:01:36.520974084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 00:01:36.523283 containerd[1478]: time="2025-05-14T00:01:36.523248955Z" level=info msg="CreateContainer within sandbox \"2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 00:01:36.531865 containerd[1478]: time="2025-05-14T00:01:36.531761189Z" level=info msg="Container 30110a1df1e2e7868a4427b2645f8c0a370af7553c6ca1850f4f8d26d3fae461: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:36.542693 containerd[1478]: time="2025-05-14T00:01:36.542644508Z" level=info msg="CreateContainer within sandbox \"2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"30110a1df1e2e7868a4427b2645f8c0a370af7553c6ca1850f4f8d26d3fae461\"" May 14 00:01:36.543648 containerd[1478]: time="2025-05-14T00:01:36.543334414Z" level=info msg="StartContainer for \"30110a1df1e2e7868a4427b2645f8c0a370af7553c6ca1850f4f8d26d3fae461\"" May 14 00:01:36.545711 containerd[1478]: time="2025-05-14T00:01:36.545665574Z" level=info msg="connecting to shim 30110a1df1e2e7868a4427b2645f8c0a370af7553c6ca1850f4f8d26d3fae461" address="unix:///run/containerd/s/ada55b3cb42a2a43b456533b666788f8cee7584fcfc725f9fb998daf9c3779f0" protocol=ttrpc version=3 May 14 00:01:36.572548 systemd[1]: Started cri-containerd-30110a1df1e2e7868a4427b2645f8c0a370af7553c6ca1850f4f8d26d3fae461.scope - libcontainer container 
30110a1df1e2e7868a4427b2645f8c0a370af7553c6ca1850f4f8d26d3fae461. May 14 00:01:36.612327 containerd[1478]: time="2025-05-14T00:01:36.612289535Z" level=info msg="StartContainer for \"30110a1df1e2e7868a4427b2645f8c0a370af7553c6ca1850f4f8d26d3fae461\" returns successfully" May 14 00:01:36.643529 systemd-networkd[1398]: cali59b9575d0be: Gained IPv6LL May 14 00:01:36.707542 systemd-networkd[1398]: cali3095cbb9336: Gained IPv6LL May 14 00:01:36.888740 containerd[1478]: time="2025-05-14T00:01:36.888704030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-thdkj,Uid:7da5a975-f73e-4520-af3e-1620cd792c56,Namespace:kube-system,Attempt:0,}" May 14 00:01:36.986581 systemd-networkd[1398]: cali133c992ccd4: Link UP May 14 00:01:36.987550 systemd-networkd[1398]: cali133c992ccd4: Gained carrier May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.924 [INFO][4488] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0 coredns-7db6d8ff4d- kube-system 7da5a975-f73e-4520-af3e-1620cd792c56 751 0 2025-05-14 00:01:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-thdkj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali133c992ccd4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thdkj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--thdkj-" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.924 [INFO][4488] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thdkj" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.948 [INFO][4502] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" HandleID="k8s-pod-network.4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Workload="localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.958 [INFO][4502] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" HandleID="k8s-pod-network.4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Workload="localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000279da0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-thdkj", "timestamp":"2025-05-14 00:01:36.948614355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.959 [INFO][4502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.959 [INFO][4502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.959 [INFO][4502] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.960 [INFO][4502] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" host="localhost" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.964 [INFO][4502] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.967 [INFO][4502] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.969 [INFO][4502] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.971 [INFO][4502] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.971 [INFO][4502] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" host="localhost" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.972 [INFO][4502] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.976 [INFO][4502] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" host="localhost" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.982 [INFO][4502] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" host="localhost" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.982 [INFO][4502] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" host="localhost" May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.982 [INFO][4502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:36.998299 containerd[1478]: 2025-05-14 00:01:36.982 [INFO][4502] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" HandleID="k8s-pod-network.4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Workload="localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0" May 14 00:01:37.001052 containerd[1478]: 2025-05-14 00:01:36.984 [INFO][4488] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thdkj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7da5a975-f73e-4520-af3e-1620cd792c56", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-thdkj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali133c992ccd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:37.001052 containerd[1478]: 2025-05-14 00:01:36.984 [INFO][4488] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thdkj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0" May 14 00:01:37.001052 containerd[1478]: 2025-05-14 00:01:36.984 [INFO][4488] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali133c992ccd4 ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thdkj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0" May 14 00:01:37.001052 containerd[1478]: 2025-05-14 00:01:36.986 [INFO][4488] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thdkj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0" May 14 
00:01:37.001052 containerd[1478]: 2025-05-14 00:01:36.987 [INFO][4488] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thdkj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7da5a975-f73e-4520-af3e-1620cd792c56", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de", Pod:"coredns-7db6d8ff4d-thdkj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali133c992ccd4", MAC:"de:77:09:c1:f3:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:37.001052 containerd[1478]: 2025-05-14 00:01:36.996 [INFO][4488] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-thdkj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--thdkj-eth0" May 14 00:01:37.026304 containerd[1478]: time="2025-05-14T00:01:37.025695655Z" level=info msg="connecting to shim 4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de" address="unix:///run/containerd/s/7b2cab85ce5c72beaac4003cf18a4c06eb05dcfb87a6254d37b3a116bce797fb" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:37.050578 systemd[1]: Started cri-containerd-4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de.scope - libcontainer container 4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de. 
May 14 00:01:37.061555 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:37.081245 containerd[1478]: time="2025-05-14T00:01:37.081200289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-thdkj,Uid:7da5a975-f73e-4520-af3e-1620cd792c56,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de\"" May 14 00:01:37.084255 containerd[1478]: time="2025-05-14T00:01:37.084230067Z" level=info msg="CreateContainer within sandbox \"4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:01:37.092032 containerd[1478]: time="2025-05-14T00:01:37.092000683Z" level=info msg="Container 63af4131cd9d07e62401c3836f7877af8a0e48c727837e90eb55b5a81946bbf5: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:37.098180 containerd[1478]: time="2025-05-14T00:01:37.098142572Z" level=info msg="CreateContainer within sandbox \"4a11e73c5ffff0e8b8e5adca071f388545982bfffb6abdb93943045a6189e8de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63af4131cd9d07e62401c3836f7877af8a0e48c727837e90eb55b5a81946bbf5\"" May 14 00:01:37.099863 containerd[1478]: time="2025-05-14T00:01:37.099823026Z" level=info msg="StartContainer for \"63af4131cd9d07e62401c3836f7877af8a0e48c727837e90eb55b5a81946bbf5\"" May 14 00:01:37.100846 containerd[1478]: time="2025-05-14T00:01:37.100773169Z" level=info msg="connecting to shim 63af4131cd9d07e62401c3836f7877af8a0e48c727837e90eb55b5a81946bbf5" address="unix:///run/containerd/s/7b2cab85ce5c72beaac4003cf18a4c06eb05dcfb87a6254d37b3a116bce797fb" protocol=ttrpc version=3 May 14 00:01:37.121522 systemd[1]: Started cri-containerd-63af4131cd9d07e62401c3836f7877af8a0e48c727837e90eb55b5a81946bbf5.scope - libcontainer container 63af4131cd9d07e62401c3836f7877af8a0e48c727837e90eb55b5a81946bbf5. 
May 14 00:01:37.151415 containerd[1478]: time="2025-05-14T00:01:37.151301971Z" level=info msg="StartContainer for \"63af4131cd9d07e62401c3836f7877af8a0e48c727837e90eb55b5a81946bbf5\" returns successfully" May 14 00:01:37.347603 systemd-networkd[1398]: cali5382d86325f: Gained IPv6LL May 14 00:01:38.075605 kubelet[2706]: I0514 00:01:38.075545 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-thdkj" podStartSLOduration=35.075527856 podStartE2EDuration="35.075527856s" podCreationTimestamp="2025-05-14 00:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:38.074377885 +0000 UTC m=+49.268483602" watchObservedRunningTime="2025-05-14 00:01:38.075527856 +0000 UTC m=+49.269633573" May 14 00:01:38.281407 containerd[1478]: time="2025-05-14T00:01:38.281204610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:38.282425 containerd[1478]: time="2025-05-14T00:01:38.282221761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 14 00:01:38.283550 containerd[1478]: time="2025-05-14T00:01:38.283275558Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:38.285274 containerd[1478]: time="2025-05-14T00:01:38.285218646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:38.286016 containerd[1478]: time="2025-05-14T00:01:38.285892546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id 
\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.764888138s" May 14 00:01:38.286016 containerd[1478]: time="2025-05-14T00:01:38.285924751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 00:01:38.287123 containerd[1478]: time="2025-05-14T00:01:38.286938381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 00:01:38.287813 containerd[1478]: time="2025-05-14T00:01:38.287779866Z" level=info msg="CreateContainer within sandbox \"1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:01:38.293242 containerd[1478]: time="2025-05-14T00:01:38.293210632Z" level=info msg="Container ebeeaa85a09f4f1d37f0cedd6a28e49319a90daa364daea4f09aa355f1254513: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:38.300668 containerd[1478]: time="2025-05-14T00:01:38.300619571Z" level=info msg="CreateContainer within sandbox \"1f2f7c5e4c179e6ef1423187db79f6d8b51ec4e9d8ef4e6df6627fcec9622a25\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ebeeaa85a09f4f1d37f0cedd6a28e49319a90daa364daea4f09aa355f1254513\"" May 14 00:01:38.301145 containerd[1478]: time="2025-05-14T00:01:38.301105803Z" level=info msg="StartContainer for \"ebeeaa85a09f4f1d37f0cedd6a28e49319a90daa364daea4f09aa355f1254513\"" May 14 00:01:38.302386 containerd[1478]: time="2025-05-14T00:01:38.302181083Z" level=info msg="connecting to shim ebeeaa85a09f4f1d37f0cedd6a28e49319a90daa364daea4f09aa355f1254513" address="unix:///run/containerd/s/fb3672041288bb9cb7e71e587763e78014fc786fee5b12f6b6739ca0e079b444" 
protocol=ttrpc version=3 May 14 00:01:38.322557 systemd[1]: Started cri-containerd-ebeeaa85a09f4f1d37f0cedd6a28e49319a90daa364daea4f09aa355f1254513.scope - libcontainer container ebeeaa85a09f4f1d37f0cedd6a28e49319a90daa364daea4f09aa355f1254513. May 14 00:01:38.373958 containerd[1478]: time="2025-05-14T00:01:38.373897242Z" level=info msg="StartContainer for \"ebeeaa85a09f4f1d37f0cedd6a28e49319a90daa364daea4f09aa355f1254513\" returns successfully" May 14 00:01:38.892897 containerd[1478]: time="2025-05-14T00:01:38.892854596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6569c5bb96-7w2lt,Uid:aa9357b4-acf1-400c-abd6-381a0f8fc5e9,Namespace:calico-apiserver,Attempt:0,}" May 14 00:01:38.893455 containerd[1478]: time="2025-05-14T00:01:38.893141918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688cf4f594-2dlhr,Uid:c01206de-1dda-4f2c-824b-aa2a43af3da7,Namespace:calico-system,Attempt:0,}" May 14 00:01:39.012488 systemd-networkd[1398]: cali133c992ccd4: Gained IPv6LL May 14 00:01:39.046126 systemd-networkd[1398]: calif3024e8b9d6: Link UP May 14 00:01:39.046607 systemd-networkd[1398]: calif3024e8b9d6: Gained carrier May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:38.948 [INFO][4651] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0 calico-apiserver-6569c5bb96- calico-apiserver aa9357b4-acf1-400c-abd6-381a0f8fc5e9 747 0 2025-05-14 00:01:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6569c5bb96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6569c5bb96-7w2lt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif3024e8b9d6 [] []}} 
ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-7w2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:38.948 [INFO][4651] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-7w2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:38.982 [INFO][4684] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" HandleID="k8s-pod-network.86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Workload="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:38.996 [INFO][4684] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" HandleID="k8s-pod-network.86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Workload="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d0420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6569c5bb96-7w2lt", "timestamp":"2025-05-14 00:01:38.982280143 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:38.996 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:38.996 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:38.996 [INFO][4684] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:38.999 [INFO][4684] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" host="localhost" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.006 [INFO][4684] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.014 [INFO][4684] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.019 [INFO][4684] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.024 [INFO][4684] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.024 [INFO][4684] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" host="localhost" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.027 [INFO][4684] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2 May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.032 [INFO][4684] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" host="localhost" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.040 [INFO][4684] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" host="localhost" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.040 [INFO][4684] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" host="localhost" May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.040 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:39.065694 containerd[1478]: 2025-05-14 00:01:39.040 [INFO][4684] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" HandleID="k8s-pod-network.86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Workload="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0" May 14 00:01:39.067438 containerd[1478]: 2025-05-14 00:01:39.044 [INFO][4651] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-7w2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0", GenerateName:"calico-apiserver-6569c5bb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa9357b4-acf1-400c-abd6-381a0f8fc5e9", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6569c5bb96", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6569c5bb96-7w2lt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3024e8b9d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:39.067438 containerd[1478]: 2025-05-14 00:01:39.044 [INFO][4651] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-7w2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0" May 14 00:01:39.067438 containerd[1478]: 2025-05-14 00:01:39.044 [INFO][4651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3024e8b9d6 ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-7w2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0" May 14 00:01:39.067438 containerd[1478]: 2025-05-14 00:01:39.047 [INFO][4651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-7w2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0" May 14 00:01:39.067438 containerd[1478]: 2025-05-14 00:01:39.048 [INFO][4651] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-7w2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0", GenerateName:"calico-apiserver-6569c5bb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa9357b4-acf1-400c-abd6-381a0f8fc5e9", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6569c5bb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2", Pod:"calico-apiserver-6569c5bb96-7w2lt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3024e8b9d6", MAC:"1e:58:d3:08:5d:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:39.067438 containerd[1478]: 2025-05-14 00:01:39.061 [INFO][4651] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" Namespace="calico-apiserver" Pod="calico-apiserver-6569c5bb96-7w2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6569c5bb96--7w2lt-eth0" May 14 00:01:39.082219 kubelet[2706]: I0514 00:01:39.081796 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6569c5bb96-gk456" podStartSLOduration=24.969017154 podStartE2EDuration="27.081443716s" podCreationTimestamp="2025-05-14 00:01:12 +0000 UTC" firstStartedPulling="2025-05-14 00:01:36.174334313 +0000 UTC m=+47.368440030" lastFinishedPulling="2025-05-14 00:01:38.286760875 +0000 UTC m=+49.480866592" observedRunningTime="2025-05-14 00:01:39.078572538 +0000 UTC m=+50.272678255" watchObservedRunningTime="2025-05-14 00:01:39.081443716 +0000 UTC m=+50.275549433" May 14 00:01:39.105698 systemd-networkd[1398]: cali17c5eb22b67: Link UP May 14 00:01:39.106191 systemd-networkd[1398]: cali17c5eb22b67: Gained carrier May 14 00:01:39.114836 containerd[1478]: time="2025-05-14T00:01:39.114640951Z" level=info msg="connecting to shim 86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2" address="unix:///run/containerd/s/1f0a983987fa98ae4081c811352f98cfc696702e32a7a10eb3de0785ab56ba12" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:38.950 [INFO][4662] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0 calico-kube-controllers-688cf4f594- calico-system c01206de-1dda-4f2c-824b-aa2a43af3da7 745 0 2025-05-14 00:01:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:688cf4f594 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost 
calico-kube-controllers-688cf4f594-2dlhr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali17c5eb22b67 [] []}} ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Namespace="calico-system" Pod="calico-kube-controllers-688cf4f594-2dlhr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:38.950 [INFO][4662] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Namespace="calico-system" Pod="calico-kube-controllers-688cf4f594-2dlhr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:38.987 [INFO][4682] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" HandleID="k8s-pod-network.2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Workload="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.003 [INFO][4682] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" HandleID="k8s-pod-network.2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Workload="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003634d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-688cf4f594-2dlhr", "timestamp":"2025-05-14 00:01:38.987865211 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.003 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.040 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.040 [INFO][4682] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.042 [INFO][4682] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" host="localhost" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.053 [INFO][4682] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.069 [INFO][4682] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.076 [INFO][4682] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.082 [INFO][4682] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.082 [INFO][4682] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" host="localhost" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.084 [INFO][4682] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.088 [INFO][4682] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" host="localhost" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.096 [INFO][4682] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" host="localhost" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.096 [INFO][4682] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" host="localhost" May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.096 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:39.132452 containerd[1478]: 2025-05-14 00:01:39.096 [INFO][4682] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" HandleID="k8s-pod-network.2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Workload="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0" May 14 00:01:39.133032 containerd[1478]: 2025-05-14 00:01:39.102 [INFO][4662] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Namespace="calico-system" Pod="calico-kube-controllers-688cf4f594-2dlhr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0", GenerateName:"calico-kube-controllers-688cf4f594-", Namespace:"calico-system", SelfLink:"", UID:"c01206de-1dda-4f2c-824b-aa2a43af3da7", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 14, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"688cf4f594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-688cf4f594-2dlhr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17c5eb22b67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:39.133032 containerd[1478]: 2025-05-14 00:01:39.102 [INFO][4662] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Namespace="calico-system" Pod="calico-kube-controllers-688cf4f594-2dlhr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0" May 14 00:01:39.133032 containerd[1478]: 2025-05-14 00:01:39.102 [INFO][4662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17c5eb22b67 ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Namespace="calico-system" Pod="calico-kube-controllers-688cf4f594-2dlhr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0" May 14 00:01:39.133032 containerd[1478]: 2025-05-14 00:01:39.107 [INFO][4662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Namespace="calico-system" Pod="calico-kube-controllers-688cf4f594-2dlhr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0" May 14 00:01:39.133032 containerd[1478]: 2025-05-14 00:01:39.109 [INFO][4662] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Namespace="calico-system" Pod="calico-kube-controllers-688cf4f594-2dlhr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0", GenerateName:"calico-kube-controllers-688cf4f594-", Namespace:"calico-system", SelfLink:"", UID:"c01206de-1dda-4f2c-824b-aa2a43af3da7", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"688cf4f594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b", Pod:"calico-kube-controllers-688cf4f594-2dlhr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17c5eb22b67", MAC:"7e:a1:72:ad:f1:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:39.133032 containerd[1478]: 2025-05-14 00:01:39.128 [INFO][4662] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" Namespace="calico-system" Pod="calico-kube-controllers-688cf4f594-2dlhr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688cf4f594--2dlhr-eth0" May 14 00:01:39.152547 systemd[1]: Started cri-containerd-86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2.scope - libcontainer container 86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2. May 14 00:01:39.171534 containerd[1478]: time="2025-05-14T00:01:39.171459828Z" level=info msg="connecting to shim 2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b" address="unix:///run/containerd/s/9fab209cc45d2f7edc070c432c07dfd0640c45739ee137175b746e26e40d7d7a" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:39.191809 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:39.199822 systemd[1]: Started cri-containerd-2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b.scope - libcontainer container 2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b. May 14 00:01:39.225911 systemd[1]: Started sshd@13-10.0.0.146:22-10.0.0.1:53668.service - OpenSSH per-connection server daemon (10.0.0.1:53668). 
May 14 00:01:39.231763 containerd[1478]: time="2025-05-14T00:01:39.231716125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6569c5bb96-7w2lt,Uid:aa9357b4-acf1-400c-abd6-381a0f8fc5e9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2\"" May 14 00:01:39.232951 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:39.238569 containerd[1478]: time="2025-05-14T00:01:39.237989158Z" level=info msg="CreateContainer within sandbox \"86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:01:39.248523 containerd[1478]: time="2025-05-14T00:01:39.248481327Z" level=info msg="Container ab89df8c4c9ce3b664d01119c20ad5ab3d715832c12540591315a8b2eee65c8b: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:39.261212 containerd[1478]: time="2025-05-14T00:01:39.261071921Z" level=info msg="CreateContainer within sandbox \"86af5a7bc2242c78377ec2feb22079ce3f3bf8a33ef203ac452753015dea3fe2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ab89df8c4c9ce3b664d01119c20ad5ab3d715832c12540591315a8b2eee65c8b\"" May 14 00:01:39.262086 containerd[1478]: time="2025-05-14T00:01:39.262057424Z" level=info msg="StartContainer for \"ab89df8c4c9ce3b664d01119c20ad5ab3d715832c12540591315a8b2eee65c8b\"" May 14 00:01:39.263127 containerd[1478]: time="2025-05-14T00:01:39.263097416Z" level=info msg="connecting to shim ab89df8c4c9ce3b664d01119c20ad5ab3d715832c12540591315a8b2eee65c8b" address="unix:///run/containerd/s/1f0a983987fa98ae4081c811352f98cfc696702e32a7a10eb3de0785ab56ba12" protocol=ttrpc version=3 May 14 00:01:39.298613 systemd[1]: Started cri-containerd-ab89df8c4c9ce3b664d01119c20ad5ab3d715832c12540591315a8b2eee65c8b.scope - libcontainer container 
ab89df8c4c9ce3b664d01119c20ad5ab3d715832c12540591315a8b2eee65c8b. May 14 00:01:39.348415 containerd[1478]: time="2025-05-14T00:01:39.348378478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688cf4f594-2dlhr,Uid:c01206de-1dda-4f2c-824b-aa2a43af3da7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b\"" May 14 00:01:39.368759 sshd[4816]: Accepted publickey for core from 10.0.0.1 port 53668 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:01:39.372636 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:39.379289 systemd-logind[1465]: New session 14 of user core. May 14 00:01:39.385031 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 00:01:39.397218 containerd[1478]: time="2025-05-14T00:01:39.396926629Z" level=info msg="StartContainer for \"ab89df8c4c9ce3b664d01119c20ad5ab3d715832c12540591315a8b2eee65c8b\" returns successfully" May 14 00:01:39.615660 sshd[4857]: Connection closed by 10.0.0.1 port 53668 May 14 00:01:39.616014 sshd-session[4816]: pam_unix(sshd:session): session closed for user core May 14 00:01:39.620621 systemd[1]: sshd@13-10.0.0.146:22-10.0.0.1:53668.service: Deactivated successfully. May 14 00:01:39.622614 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:01:39.624182 systemd-logind[1465]: Session 14 logged out. Waiting for processes to exit. May 14 00:01:39.625506 systemd-logind[1465]: Removed session 14. 
May 14 00:01:39.628583 containerd[1478]: time="2025-05-14T00:01:39.628021371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:39.629056 containerd[1478]: time="2025-05-14T00:01:39.629000714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 14 00:01:39.630072 containerd[1478]: time="2025-05-14T00:01:39.629961333Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:39.631756 containerd[1478]: time="2025-05-14T00:01:39.631727231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:39.633155 containerd[1478]: time="2025-05-14T00:01:39.633129035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.34616013s" May 14 00:01:39.633280 containerd[1478]: time="2025-05-14T00:01:39.633260494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 14 00:01:39.634424 containerd[1478]: time="2025-05-14T00:01:39.634401580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 00:01:39.636096 containerd[1478]: time="2025-05-14T00:01:39.636022616Z" level=info 
msg="CreateContainer within sandbox \"2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 00:01:39.645105 containerd[1478]: time="2025-05-14T00:01:39.645055012Z" level=info msg="Container 747336ed294975007d84707290469a9f756f6c06e788c3f1667048f8748b1d0b: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:39.655895 containerd[1478]: time="2025-05-14T00:01:39.655769533Z" level=info msg="CreateContainer within sandbox \"2f4888e7c22ce0d6df607457519656ac647bfb06348d4a4b83e5a23801784a94\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"747336ed294975007d84707290469a9f756f6c06e788c3f1667048f8748b1d0b\"" May 14 00:01:39.656467 containerd[1478]: time="2025-05-14T00:01:39.656411546Z" level=info msg="StartContainer for \"747336ed294975007d84707290469a9f756f6c06e788c3f1667048f8748b1d0b\"" May 14 00:01:39.659535 containerd[1478]: time="2025-05-14T00:01:39.659506477Z" level=info msg="connecting to shim 747336ed294975007d84707290469a9f756f6c06e788c3f1667048f8748b1d0b" address="unix:///run/containerd/s/ada55b3cb42a2a43b456533b666788f8cee7584fcfc725f9fb998daf9c3779f0" protocol=ttrpc version=3 May 14 00:01:39.679535 systemd[1]: Started cri-containerd-747336ed294975007d84707290469a9f756f6c06e788c3f1667048f8748b1d0b.scope - libcontainer container 747336ed294975007d84707290469a9f756f6c06e788c3f1667048f8748b1d0b. 
May 14 00:01:39.728864 containerd[1478]: time="2025-05-14T00:01:39.728751283Z" level=info msg="StartContainer for \"747336ed294975007d84707290469a9f756f6c06e788c3f1667048f8748b1d0b\" returns successfully" May 14 00:01:39.987158 kubelet[2706]: I0514 00:01:39.986797 2706 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 00:01:39.991264 kubelet[2706]: I0514 00:01:39.991227 2706 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 00:01:40.077963 kubelet[2706]: I0514 00:01:40.077175 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:40.084585 kubelet[2706]: I0514 00:01:40.083784 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-72zk5" podStartSLOduration=21.810355706 podStartE2EDuration="26.083766224s" podCreationTimestamp="2025-05-14 00:01:14 +0000 UTC" firstStartedPulling="2025-05-14 00:01:35.360810516 +0000 UTC m=+46.554916193" lastFinishedPulling="2025-05-14 00:01:39.634220994 +0000 UTC m=+50.828326711" observedRunningTime="2025-05-14 00:01:40.083588718 +0000 UTC m=+51.277694395" watchObservedRunningTime="2025-05-14 00:01:40.083766224 +0000 UTC m=+51.277871981" May 14 00:01:40.093572 kubelet[2706]: I0514 00:01:40.093325 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6569c5bb96-7w2lt" podStartSLOduration=28.09330923 podStartE2EDuration="28.09330923s" podCreationTimestamp="2025-05-14 00:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:40.092301366 +0000 UTC m=+51.286407083" watchObservedRunningTime="2025-05-14 00:01:40.09330923 +0000 UTC m=+51.287414907" May 14 
00:01:40.355492 systemd-networkd[1398]: calif3024e8b9d6: Gained IPv6LL May 14 00:01:40.547533 systemd-networkd[1398]: cali17c5eb22b67: Gained IPv6LL May 14 00:01:41.078844 kubelet[2706]: I0514 00:01:41.078811 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:41.166177 containerd[1478]: time="2025-05-14T00:01:41.166128711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:41.166915 containerd[1478]: time="2025-05-14T00:01:41.166870575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 14 00:01:41.168084 containerd[1478]: time="2025-05-14T00:01:41.167768662Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:41.169993 containerd[1478]: time="2025-05-14T00:01:41.169959090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:41.170685 containerd[1478]: time="2025-05-14T00:01:41.170659388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.536230124s" May 14 00:01:41.170800 containerd[1478]: time="2025-05-14T00:01:41.170780646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference 
\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 14 00:01:41.186352 containerd[1478]: time="2025-05-14T00:01:41.186318113Z" level=info msg="CreateContainer within sandbox \"2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 00:01:41.195441 containerd[1478]: time="2025-05-14T00:01:41.195401111Z" level=info msg="Container 8b23e9b0a762349a10d5e9c656e87b953851a1c8e06dc760bf220cedc659d12e: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:41.196794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284711282.mount: Deactivated successfully. May 14 00:01:41.203698 containerd[1478]: time="2025-05-14T00:01:41.203655753Z" level=info msg="CreateContainer within sandbox \"2937c7ff685c5bb0849eab38a747573c00286a3cfda4618dbfefaa97d6cda28b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8b23e9b0a762349a10d5e9c656e87b953851a1c8e06dc760bf220cedc659d12e\"" May 14 00:01:41.204080 containerd[1478]: time="2025-05-14T00:01:41.204047728Z" level=info msg="StartContainer for \"8b23e9b0a762349a10d5e9c656e87b953851a1c8e06dc760bf220cedc659d12e\"" May 14 00:01:41.205123 containerd[1478]: time="2025-05-14T00:01:41.205096396Z" level=info msg="connecting to shim 8b23e9b0a762349a10d5e9c656e87b953851a1c8e06dc760bf220cedc659d12e" address="unix:///run/containerd/s/9fab209cc45d2f7edc070c432c07dfd0640c45739ee137175b746e26e40d7d7a" protocol=ttrpc version=3 May 14 00:01:41.233527 systemd[1]: Started cri-containerd-8b23e9b0a762349a10d5e9c656e87b953851a1c8e06dc760bf220cedc659d12e.scope - libcontainer container 8b23e9b0a762349a10d5e9c656e87b953851a1c8e06dc760bf220cedc659d12e. 
May 14 00:01:41.276723 containerd[1478]: time="2025-05-14T00:01:41.276671790Z" level=info msg="StartContainer for \"8b23e9b0a762349a10d5e9c656e87b953851a1c8e06dc760bf220cedc659d12e\" returns successfully" May 14 00:01:42.095136 kubelet[2706]: I0514 00:01:42.094557 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-688cf4f594-2dlhr" podStartSLOduration=26.273448889 podStartE2EDuration="28.094540182s" podCreationTimestamp="2025-05-14 00:01:14 +0000 UTC" firstStartedPulling="2025-05-14 00:01:39.350369288 +0000 UTC m=+50.544474965" lastFinishedPulling="2025-05-14 00:01:41.171460541 +0000 UTC m=+52.365566258" observedRunningTime="2025-05-14 00:01:42.094388361 +0000 UTC m=+53.288494078" watchObservedRunningTime="2025-05-14 00:01:42.094540182 +0000 UTC m=+53.288645899" May 14 00:01:42.124057 containerd[1478]: time="2025-05-14T00:01:42.124016825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b23e9b0a762349a10d5e9c656e87b953851a1c8e06dc760bf220cedc659d12e\" id:\"26190b2a67a04f18ef73b99f3d41df185cb8f7f01eeed1f6277dca23971240b3\" pid:4969 exited_at:{seconds:1747180902 nanos:122810818}" May 14 00:01:44.632196 systemd[1]: Started sshd@14-10.0.0.146:22-10.0.0.1:39420.service - OpenSSH per-connection server daemon (10.0.0.1:39420). May 14 00:01:44.696235 sshd[4983]: Accepted publickey for core from 10.0.0.1 port 39420 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:01:44.699762 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:44.705010 systemd-logind[1465]: New session 15 of user core. May 14 00:01:44.723644 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 14 00:01:44.794509 kubelet[2706]: I0514 00:01:44.794444 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:44.902987 sshd[4985]: Connection closed by 10.0.0.1 port 39420 May 14 00:01:44.903326 sshd-session[4983]: pam_unix(sshd:session): session closed for user core May 14 00:01:44.908340 systemd[1]: sshd@14-10.0.0.146:22-10.0.0.1:39420.service: Deactivated successfully. May 14 00:01:44.910821 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:01:44.911822 systemd-logind[1465]: Session 15 logged out. Waiting for processes to exit. May 14 00:01:44.914030 systemd-logind[1465]: Removed session 15. May 14 00:01:49.918983 systemd[1]: Started sshd@15-10.0.0.146:22-10.0.0.1:39434.service - OpenSSH per-connection server daemon (10.0.0.1:39434). May 14 00:01:49.981545 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 39434 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 14 00:01:49.983451 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:49.987894 systemd-logind[1465]: New session 16 of user core. May 14 00:01:49.997568 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 00:01:50.187183 sshd[5008]: Connection closed by 10.0.0.1 port 39434 May 14 00:01:50.188167 sshd-session[5006]: pam_unix(sshd:session): session closed for user core May 14 00:01:50.206214 systemd[1]: sshd@15-10.0.0.146:22-10.0.0.1:39434.service: Deactivated successfully. May 14 00:01:50.208187 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:01:50.210560 systemd-logind[1465]: Session 16 logged out. Waiting for processes to exit. May 14 00:01:50.212726 systemd[1]: Started sshd@16-10.0.0.146:22-10.0.0.1:39442.service - OpenSSH per-connection server daemon (10.0.0.1:39442). May 14 00:01:50.214240 systemd-logind[1465]: Removed session 16. 
May 14 00:01:50.271897 sshd[5021]: Accepted publickey for core from 10.0.0.1 port 39442 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:01:50.273473 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:50.278467 systemd-logind[1465]: New session 17 of user core.
May 14 00:01:50.293596 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 00:01:50.505479 sshd[5024]: Connection closed by 10.0.0.1 port 39442
May 14 00:01:50.505421 sshd-session[5021]: pam_unix(sshd:session): session closed for user core
May 14 00:01:50.516590 systemd[1]: sshd@16-10.0.0.146:22-10.0.0.1:39442.service: Deactivated successfully.
May 14 00:01:50.518555 systemd[1]: session-17.scope: Deactivated successfully.
May 14 00:01:50.519385 systemd-logind[1465]: Session 17 logged out. Waiting for processes to exit.
May 14 00:01:50.522250 systemd[1]: Started sshd@17-10.0.0.146:22-10.0.0.1:39458.service - OpenSSH per-connection server daemon (10.0.0.1:39458).
May 14 00:01:50.523428 systemd-logind[1465]: Removed session 17.
May 14 00:01:50.583400 sshd[5034]: Accepted publickey for core from 10.0.0.1 port 39458 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:01:50.584984 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:50.592305 systemd-logind[1465]: New session 18 of user core.
May 14 00:01:50.599546 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 00:01:52.124398 sshd[5043]: Connection closed by 10.0.0.1 port 39458
May 14 00:01:52.125213 sshd-session[5034]: pam_unix(sshd:session): session closed for user core
May 14 00:01:52.143659 systemd[1]: Started sshd@18-10.0.0.146:22-10.0.0.1:39474.service - OpenSSH per-connection server daemon (10.0.0.1:39474).
May 14 00:01:52.144706 systemd-logind[1465]: Session 18 logged out. Waiting for processes to exit.
May 14 00:01:52.145098 systemd[1]: sshd@17-10.0.0.146:22-10.0.0.1:39458.service: Deactivated successfully.
May 14 00:01:52.146704 systemd[1]: session-18.scope: Deactivated successfully.
May 14 00:01:52.146906 systemd[1]: session-18.scope: Consumed 542ms CPU time, 68.4M memory peak.
May 14 00:01:52.152889 systemd-logind[1465]: Removed session 18.
May 14 00:01:52.219988 sshd[5058]: Accepted publickey for core from 10.0.0.1 port 39474 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:01:52.221612 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:52.226976 systemd-logind[1465]: New session 19 of user core.
May 14 00:01:52.232554 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 00:01:52.524250 sshd[5065]: Connection closed by 10.0.0.1 port 39474
May 14 00:01:52.524535 sshd-session[5058]: pam_unix(sshd:session): session closed for user core
May 14 00:01:52.542654 systemd[1]: Started sshd@19-10.0.0.146:22-10.0.0.1:60658.service - OpenSSH per-connection server daemon (10.0.0.1:60658).
May 14 00:01:52.543140 systemd[1]: sshd@18-10.0.0.146:22-10.0.0.1:39474.service: Deactivated successfully.
May 14 00:01:52.553094 systemd[1]: session-19.scope: Deactivated successfully.
May 14 00:01:52.555188 systemd-logind[1465]: Session 19 logged out. Waiting for processes to exit.
May 14 00:01:52.557050 systemd-logind[1465]: Removed session 19.
May 14 00:01:52.593025 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 60658 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:01:52.594521 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:52.601018 systemd-logind[1465]: New session 20 of user core.
May 14 00:01:52.612611 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 00:01:52.761796 sshd[5078]: Connection closed by 10.0.0.1 port 60658
May 14 00:01:52.762154 sshd-session[5073]: pam_unix(sshd:session): session closed for user core
May 14 00:01:52.765958 systemd[1]: sshd@19-10.0.0.146:22-10.0.0.1:60658.service: Deactivated successfully.
May 14 00:01:52.769590 systemd[1]: session-20.scope: Deactivated successfully.
May 14 00:01:52.770751 systemd-logind[1465]: Session 20 logged out. Waiting for processes to exit.
May 14 00:01:52.771857 systemd-logind[1465]: Removed session 20.
May 14 00:01:54.279123 containerd[1478]: time="2025-05-14T00:01:54.279078715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b23e9b0a762349a10d5e9c656e87b953851a1c8e06dc760bf220cedc659d12e\" id:\"b0bc78e6f1832c756672dc795517f5b81747673aee6d68b81213cc0144470184\" pid:5102 exited_at:{seconds:1747180914 nanos:278819564}"
May 14 00:01:55.467418 kubelet[2706]: I0514 00:01:55.467329 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 00:01:57.777646 systemd[1]: Started sshd@20-10.0.0.146:22-10.0.0.1:60666.service - OpenSSH per-connection server daemon (10.0.0.1:60666).
May 14 00:01:57.830227 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 60666 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:01:57.831821 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:57.835516 systemd-logind[1465]: New session 21 of user core.
May 14 00:01:57.848517 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 00:01:57.982236 sshd[5120]: Connection closed by 10.0.0.1 port 60666
May 14 00:01:57.982814 sshd-session[5118]: pam_unix(sshd:session): session closed for user core
May 14 00:01:57.987505 systemd[1]: sshd@20-10.0.0.146:22-10.0.0.1:60666.service: Deactivated successfully.
May 14 00:01:57.989548 systemd[1]: session-21.scope: Deactivated successfully.
May 14 00:01:57.991489 systemd-logind[1465]: Session 21 logged out. Waiting for processes to exit.
May 14 00:01:57.992378 systemd-logind[1465]: Removed session 21.
May 14 00:02:00.264237 containerd[1478]: time="2025-05-14T00:02:00.264187910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79a2dca209959e0030305d2a5c9442baf91500d1d698bf108acf9e07f6ffc4db\" id:\"1557d024be879e190c76c64590c868c3bee197226671bf7ef7d2c2071ef54f08\" pid:5147 exited_at:{seconds:1747180920 nanos:263844384}"
May 14 00:02:02.994469 systemd[1]: Started sshd@21-10.0.0.146:22-10.0.0.1:47328.service - OpenSSH per-connection server daemon (10.0.0.1:47328).
May 14 00:02:03.057327 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 47328 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:02:03.060009 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:03.064423 systemd-logind[1465]: New session 22 of user core.
May 14 00:02:03.071581 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 00:02:03.200431 sshd[5163]: Connection closed by 10.0.0.1 port 47328
May 14 00:02:03.200771 sshd-session[5161]: pam_unix(sshd:session): session closed for user core
May 14 00:02:03.204007 systemd[1]: sshd@21-10.0.0.146:22-10.0.0.1:47328.service: Deactivated successfully.
May 14 00:02:03.206792 systemd[1]: session-22.scope: Deactivated successfully.
May 14 00:02:03.207618 systemd-logind[1465]: Session 22 logged out. Waiting for processes to exit.
May 14 00:02:03.208465 systemd-logind[1465]: Removed session 22.
May 14 00:02:08.216449 systemd[1]: Started sshd@22-10.0.0.146:22-10.0.0.1:47338.service - OpenSSH per-connection server daemon (10.0.0.1:47338).
May 14 00:02:08.273690 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 47338 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 14 00:02:08.274942 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:08.279201 systemd-logind[1465]: New session 23 of user core.
May 14 00:02:08.289566 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 00:02:08.429486 sshd[5181]: Connection closed by 10.0.0.1 port 47338
May 14 00:02:08.429864 sshd-session[5179]: pam_unix(sshd:session): session closed for user core
May 14 00:02:08.436934 systemd-logind[1465]: Session 23 logged out. Waiting for processes to exit.
May 14 00:02:08.437224 systemd[1]: sshd@22-10.0.0.146:22-10.0.0.1:47338.service: Deactivated successfully.
May 14 00:02:08.439060 systemd[1]: session-23.scope: Deactivated successfully.
May 14 00:02:08.440983 systemd-logind[1465]: Removed session 23.