May 14 00:00:43.907733 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 00:00:43.907755 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025
May 14 00:00:43.907764 kernel: KASLR enabled
May 14 00:00:43.907770 kernel: efi: EFI v2.7 by EDK II
May 14 00:00:43.907775 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb4ff018 ACPI 2.0=0xd93ef018 RNG=0xd93efa18 MEMRESERVE=0xd91e1f18
May 14 00:00:43.907781 kernel: random: crng init done
May 14 00:00:43.907788 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 14 00:00:43.907793 kernel: secureboot: Secure boot enabled
May 14 00:00:43.907799 kernel: ACPI: Early table checksum verification disabled
May 14 00:00:43.907805 kernel: ACPI: RSDP 0x00000000D93EF018 000024 (v02 BOCHS )
May 14 00:00:43.907812 kernel: ACPI: XSDT 0x00000000D93EFF18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 00:00:43.907818 kernel: ACPI: FACP 0x00000000D93EFB18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:43.907824 kernel: ACPI: DSDT 0x00000000D93ED018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:43.907829 kernel: ACPI: APIC 0x00000000D93EFC98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:43.907837 kernel: ACPI: PPTT 0x00000000D93EF098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:43.907844 kernel: ACPI: GTDT 0x00000000D93EF818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:43.907850 kernel: ACPI: MCFG 0x00000000D93EFA98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:43.907856 kernel: ACPI: SPCR 0x00000000D93EF918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:43.907862 kernel: ACPI: DBG2 0x00000000D93EF998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:43.907868 kernel: ACPI: IORT 0x00000000D93EF198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:43.907874 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 00:00:43.907880 kernel: NUMA: Failed to initialise from firmware
May 14 00:00:43.907886 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:00:43.907892 kernel: NUMA: NODE_DATA [mem 0xdc729800-0xdc72efff]
May 14 00:00:43.907898 kernel: Zone ranges:
May 14 00:00:43.907905 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:00:43.907911 kernel: DMA32 empty
May 14 00:00:43.907917 kernel: Normal empty
May 14 00:00:43.907923 kernel: Movable zone start for each node
May 14 00:00:43.907929 kernel: Early memory node ranges
May 14 00:00:43.907935 kernel: node 0: [mem 0x0000000040000000-0x00000000d93effff]
May 14 00:00:43.907941 kernel: node 0: [mem 0x00000000d93f0000-0x00000000d972ffff]
May 14 00:00:43.907947 kernel: node 0: [mem 0x00000000d9730000-0x00000000dcbfffff]
May 14 00:00:43.907953 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
May 14 00:00:43.907959 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 14 00:00:43.907965 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:00:43.907971 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 00:00:43.907978 kernel: psci: probing for conduit method from ACPI.
May 14 00:00:43.907984 kernel: psci: PSCIv1.1 detected in firmware.
May 14 00:00:43.907990 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 00:00:43.907999 kernel: psci: Trusted OS migration not required
May 14 00:00:43.908006 kernel: psci: SMC Calling Convention v1.1
May 14 00:00:43.908012 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 00:00:43.908019 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 14 00:00:43.908027 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 14 00:00:43.908034 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 00:00:43.908040 kernel: Detected PIPT I-cache on CPU0
May 14 00:00:43.908046 kernel: CPU features: detected: GIC system register CPU interface
May 14 00:00:43.908053 kernel: CPU features: detected: Hardware dirty bit management
May 14 00:00:43.908059 kernel: CPU features: detected: Spectre-v4
May 14 00:00:43.908065 kernel: CPU features: detected: Spectre-BHB
May 14 00:00:43.908072 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 00:00:43.908078 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 00:00:43.908085 kernel: CPU features: detected: ARM erratum 1418040
May 14 00:00:43.908093 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 00:00:43.908100 kernel: alternatives: applying boot alternatives
May 14 00:00:43.908107 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 14 00:00:43.908114 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 00:00:43.908122 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 00:00:43.908128 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 00:00:43.908135 kernel: Fallback order for Node 0: 0
May 14 00:00:43.908141 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 14 00:00:43.908147 kernel: Policy zone: DMA
May 14 00:00:43.908154 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 00:00:43.908161 kernel: software IO TLB: area num 4.
May 14 00:00:43.908168 kernel: software IO TLB: mapped [mem 0x00000000d2800000-0x00000000d6800000] (64MB)
May 14 00:00:43.908175 kernel: Memory: 2385752K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 186536K reserved, 0K cma-reserved)
May 14 00:00:43.908181 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 00:00:43.908188 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 00:00:43.908195 kernel: rcu: RCU event tracing is enabled.
May 14 00:00:43.908201 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 00:00:43.908208 kernel: Trampoline variant of Tasks RCU enabled.
May 14 00:00:43.908214 kernel: Tracing variant of Tasks RCU enabled.
May 14 00:00:43.908221 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 00:00:43.908227 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 00:00:43.908234 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 00:00:43.908242 kernel: GICv3: 256 SPIs implemented
May 14 00:00:43.908248 kernel: GICv3: 0 Extended SPIs implemented
May 14 00:00:43.908255 kernel: Root IRQ handler: gic_handle_irq
May 14 00:00:43.908262 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 00:00:43.908268 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 00:00:43.908275 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 00:00:43.908281 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 14 00:00:43.908288 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 14 00:00:43.908295 kernel: GICv3: using LPI property table @0x00000000400f0000
May 14 00:00:43.908301 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 14 00:00:43.908314 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 00:00:43.908331 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:00:43.908338 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 00:00:43.908345 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 00:00:43.908358 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 00:00:43.908365 kernel: arm-pv: using stolen time PV
May 14 00:00:43.908372 kernel: Console: colour dummy device 80x25
May 14 00:00:43.908379 kernel: ACPI: Core revision 20230628
May 14 00:00:43.908386 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 00:00:43.908393 kernel: pid_max: default: 32768 minimum: 301
May 14 00:00:43.908400 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 00:00:43.908408 kernel: landlock: Up and running.
May 14 00:00:43.908415 kernel: SELinux: Initializing.
May 14 00:00:43.908422 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:00:43.908437 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:00:43.908444 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 14 00:00:43.908451 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 00:00:43.908457 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 00:00:43.908464 kernel: rcu: Hierarchical SRCU implementation.
May 14 00:00:43.908471 kernel: rcu: Max phase no-delay instances is 400.
May 14 00:00:43.908479 kernel: Platform MSI: ITS@0x8080000 domain created
May 14 00:00:43.908486 kernel: PCI/MSI: ITS@0x8080000 domain created
May 14 00:00:43.908493 kernel: Remapping and enabling EFI services.
May 14 00:00:43.908499 kernel: smp: Bringing up secondary CPUs ...
May 14 00:00:43.908506 kernel: Detected PIPT I-cache on CPU1
May 14 00:00:43.908512 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 00:00:43.908519 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 14 00:00:43.908526 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:00:43.908532 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 00:00:43.908538 kernel: Detected PIPT I-cache on CPU2
May 14 00:00:43.908547 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 00:00:43.908554 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 14 00:00:43.908565 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:00:43.908574 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 00:00:43.908581 kernel: Detected PIPT I-cache on CPU3
May 14 00:00:43.908587 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 00:00:43.908594 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 14 00:00:43.908601 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:00:43.908608 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 00:00:43.908615 kernel: smp: Brought up 1 node, 4 CPUs
May 14 00:00:43.908622 kernel: SMP: Total of 4 processors activated.
May 14 00:00:43.908630 kernel: CPU features: detected: 32-bit EL0 Support
May 14 00:00:43.908637 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 00:00:43.908644 kernel: CPU features: detected: Common not Private translations
May 14 00:00:43.908651 kernel: CPU features: detected: CRC32 instructions
May 14 00:00:43.908658 kernel: CPU features: detected: Enhanced Virtualization Traps
May 14 00:00:43.908665 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 00:00:43.908674 kernel: CPU features: detected: LSE atomic instructions
May 14 00:00:43.908680 kernel: CPU features: detected: Privileged Access Never
May 14 00:00:43.908688 kernel: CPU features: detected: RAS Extension Support
May 14 00:00:43.908694 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 00:00:43.908706 kernel: CPU: All CPU(s) started at EL1
May 14 00:00:43.908741 kernel: alternatives: applying system-wide alternatives
May 14 00:00:43.908748 kernel: devtmpfs: initialized
May 14 00:00:43.908755 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 00:00:43.908762 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 00:00:43.908770 kernel: pinctrl core: initialized pinctrl subsystem
May 14 00:00:43.908777 kernel: SMBIOS 3.0.0 present.
May 14 00:00:43.908784 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 00:00:43.908790 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 00:00:43.908797 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 00:00:43.908804 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 00:00:43.908811 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 00:00:43.908818 kernel: audit: initializing netlink subsys (disabled)
May 14 00:00:43.908825 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
May 14 00:00:43.908833 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 00:00:43.908840 kernel: cpuidle: using governor menu
May 14 00:00:43.908847 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 00:00:43.908854 kernel: ASID allocator initialised with 32768 entries
May 14 00:00:43.908861 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 00:00:43.908868 kernel: Serial: AMBA PL011 UART driver
May 14 00:00:43.908875 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 00:00:43.908882 kernel: Modules: 0 pages in range for non-PLT usage
May 14 00:00:43.908889 kernel: Modules: 509232 pages in range for PLT usage
May 14 00:00:43.908897 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 00:00:43.908904 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 00:00:43.908911 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 00:00:43.908917 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 00:00:43.908924 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 00:00:43.908931 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 00:00:43.908938 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 00:00:43.908945 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 00:00:43.908952 kernel: ACPI: Added _OSI(Module Device)
May 14 00:00:43.908961 kernel: ACPI: Added _OSI(Processor Device)
May 14 00:00:43.908968 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 00:00:43.908974 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 00:00:43.908982 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 00:00:43.908989 kernel: ACPI: Interpreter enabled
May 14 00:00:43.908995 kernel: ACPI: Using GIC for interrupt routing
May 14 00:00:43.909002 kernel: ACPI: MCFG table detected, 1 entries
May 14 00:00:43.909009 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 00:00:43.909015 kernel: printk: console [ttyAMA0] enabled
May 14 00:00:43.909024 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 00:00:43.909165 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 00:00:43.909245 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 00:00:43.909312 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 00:00:43.909388 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 00:00:43.909473 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 00:00:43.909485 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 00:00:43.909496 kernel: PCI host bridge to bus 0000:00
May 14 00:00:43.909568 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 00:00:43.909628 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 00:00:43.909687 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 00:00:43.909747 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 00:00:43.909847 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 14 00:00:43.909933 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 14 00:00:43.910006 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 14 00:00:43.910072 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 14 00:00:43.910137 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 00:00:43.910200 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 00:00:43.910265 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 14 00:00:43.910329 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 14 00:00:43.910396 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 00:00:43.910470 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 00:00:43.910530 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 00:00:43.910539 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 00:00:43.910546 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 00:00:43.910553 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 00:00:43.910560 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 00:00:43.910567 kernel: iommu: Default domain type: Translated
May 14 00:00:43.910574 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 00:00:43.910584 kernel: efivars: Registered efivars operations
May 14 00:00:43.910590 kernel: vgaarb: loaded
May 14 00:00:43.910597 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 00:00:43.910604 kernel: VFS: Disk quotas dquot_6.6.0
May 14 00:00:43.910611 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 00:00:43.910618 kernel: pnp: PnP ACPI init
May 14 00:00:43.910689 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 00:00:43.910699 kernel: pnp: PnP ACPI: found 1 devices
May 14 00:00:43.910708 kernel: NET: Registered PF_INET protocol family
May 14 00:00:43.910715 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 00:00:43.910722 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 00:00:43.910729 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 00:00:43.910736 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 00:00:43.910743 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 00:00:43.910750 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 00:00:43.910757 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:00:43.910764 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:00:43.910773 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 00:00:43.910780 kernel: PCI: CLS 0 bytes, default 64
May 14 00:00:43.910786 kernel: kvm [1]: HYP mode not available
May 14 00:00:43.910793 kernel: Initialise system trusted keyrings
May 14 00:00:43.910800 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 00:00:43.910807 kernel: Key type asymmetric registered
May 14 00:00:43.910814 kernel: Asymmetric key parser 'x509' registered
May 14 00:00:43.910821 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 00:00:43.910828 kernel: io scheduler mq-deadline registered
May 14 00:00:43.910836 kernel: io scheduler kyber registered
May 14 00:00:43.910843 kernel: io scheduler bfq registered
May 14 00:00:43.910850 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 00:00:43.910857 kernel: ACPI: button: Power Button [PWRB]
May 14 00:00:43.910864 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 00:00:43.910930 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 00:00:43.910939 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 00:00:43.910946 kernel: thunder_xcv, ver 1.0
May 14 00:00:43.910953 kernel: thunder_bgx, ver 1.0
May 14 00:00:43.910962 kernel: nicpf, ver 1.0
May 14 00:00:43.910969 kernel: nicvf, ver 1.0
May 14 00:00:43.911041 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 00:00:43.911102 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T00:00:43 UTC (1747180843)
May 14 00:00:43.911111 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 00:00:43.911118 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 14 00:00:43.911125 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 14 00:00:43.911132 kernel: watchdog: Hard watchdog permanently disabled
May 14 00:00:43.911141 kernel: NET: Registered PF_INET6 protocol family
May 14 00:00:43.911148 kernel: Segment Routing with IPv6
May 14 00:00:43.911155 kernel: In-situ OAM (IOAM) with IPv6
May 14 00:00:43.911162 kernel: NET: Registered PF_PACKET protocol family
May 14 00:00:43.911169 kernel: Key type dns_resolver registered
May 14 00:00:43.911176 kernel: registered taskstats version 1
May 14 00:00:43.911183 kernel: Loading compiled-in X.509 certificates
May 14 00:00:43.911190 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd'
May 14 00:00:43.911197 kernel: Key type .fscrypt registered
May 14 00:00:43.911205 kernel: Key type fscrypt-provisioning registered
May 14 00:00:43.911212 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 00:00:43.911219 kernel: ima: Allocated hash algorithm: sha1
May 14 00:00:43.911225 kernel: ima: No architecture policies found
May 14 00:00:43.911232 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 00:00:43.911239 kernel: clk: Disabling unused clocks
May 14 00:00:43.911246 kernel: Freeing unused kernel memory: 38464K
May 14 00:00:43.911253 kernel: Run /init as init process
May 14 00:00:43.911260 kernel: with arguments:
May 14 00:00:43.911268 kernel: /init
May 14 00:00:43.911274 kernel: with environment:
May 14 00:00:43.911281 kernel: HOME=/
May 14 00:00:43.911288 kernel: TERM=linux
May 14 00:00:43.911295 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 00:00:43.911302 systemd[1]: Successfully made /usr/ read-only.
May 14 00:00:43.911312 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 00:00:43.911320 systemd[1]: Detected virtualization kvm.
May 14 00:00:43.911329 systemd[1]: Detected architecture arm64.
May 14 00:00:43.911336 systemd[1]: Running in initrd.
May 14 00:00:43.911344 systemd[1]: No hostname configured, using default hostname.
May 14 00:00:43.911359 systemd[1]: Hostname set to .
May 14 00:00:43.911368 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:00:43.911375 systemd[1]: Queued start job for default target initrd.target.
May 14 00:00:43.911383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:00:43.911391 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:00:43.911401 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 00:00:43.911409 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 00:00:43.911417 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 00:00:43.911435 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 00:00:43.911445 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 00:00:43.911453 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 00:00:43.911461 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:00:43.911471 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 00:00:43.911478 systemd[1]: Reached target paths.target - Path Units.
May 14 00:00:43.911486 systemd[1]: Reached target slices.target - Slice Units.
May 14 00:00:43.911495 systemd[1]: Reached target swap.target - Swaps.
May 14 00:00:43.911502 systemd[1]: Reached target timers.target - Timer Units.
May 14 00:00:43.911510 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 00:00:43.911517 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 00:00:43.911525 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 00:00:43.911534 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 00:00:43.911542 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:00:43.911550 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 00:00:43.911557 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:00:43.911565 systemd[1]: Reached target sockets.target - Socket Units.
May 14 00:00:43.911572 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 00:00:43.911580 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 00:00:43.911587 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 00:00:43.911595 systemd[1]: Starting systemd-fsck-usr.service...
May 14 00:00:43.911604 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 00:00:43.911612 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 00:00:43.911620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:43.911628 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:00:43.911635 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 00:00:43.911643 systemd[1]: Finished systemd-fsck-usr.service.
May 14 00:00:43.911653 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 00:00:43.911678 systemd-journald[236]: Collecting audit messages is disabled.
May 14 00:00:43.911698 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 00:00:43.911706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:43.911714 kernel: Bridge firewalling registered
May 14 00:00:43.911721 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:00:43.911729 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 00:00:43.911737 systemd-journald[236]: Journal started
May 14 00:00:43.911757 systemd-journald[236]: Runtime Journal (/run/log/journal/d4ec4f3d3fce4627816be551a5991182) is 5.9M, max 47.3M, 41.4M free.
May 14 00:00:43.883249 systemd-modules-load[237]: Inserted module 'overlay'
May 14 00:00:43.913523 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 00:00:43.906502 systemd-modules-load[237]: Inserted module 'br_netfilter'
May 14 00:00:43.914682 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 00:00:43.917781 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 00:00:43.919454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 00:00:43.927528 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 00:00:43.932653 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:00:43.934593 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 00:00:43.938709 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:43.942574 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 00:00:43.943663 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:00:43.955561 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 00:00:43.964213 dracut-cmdline[277]: dracut-dracut-053
May 14 00:00:43.966822 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 14 00:00:43.988440 systemd-resolved[281]: Positive Trust Anchors:
May 14 00:00:43.988459 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:00:43.988490 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:00:43.993347 systemd-resolved[281]: Defaulting to hostname 'linux'.
May 14 00:00:43.994330 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 00:00:43.997838 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 00:00:44.039467 kernel: SCSI subsystem initialized
May 14 00:00:44.044457 kernel: Loading iSCSI transport class v2.0-870.
May 14 00:00:44.052456 kernel: iscsi: registered transport (tcp)
May 14 00:00:44.066654 kernel: iscsi: registered transport (qla4xxx)
May 14 00:00:44.066687 kernel: QLogic iSCSI HBA Driver
May 14 00:00:44.111079 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 00:00:44.113526 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 00:00:44.147759 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 00:00:44.147841 kernel: device-mapper: uevent: version 1.0.3
May 14 00:00:44.149008 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 00:00:44.195466 kernel: raid6: neonx8 gen() 15786 MB/s
May 14 00:00:44.212455 kernel: raid6: neonx4 gen() 15760 MB/s
May 14 00:00:44.229494 kernel: raid6: neonx2 gen() 13277 MB/s
May 14 00:00:44.246454 kernel: raid6: neonx1 gen() 10419 MB/s
May 14 00:00:44.263451 kernel: raid6: int64x8 gen() 6788 MB/s
May 14 00:00:44.280456 kernel: raid6: int64x4 gen() 7346 MB/s
May 14 00:00:44.297451 kernel: raid6: int64x2 gen() 6104 MB/s
May 14 00:00:44.314558 kernel: raid6: int64x1 gen() 5047 MB/s
May 14 00:00:44.314571 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s
May 14 00:00:44.332517 kernel: raid6: .... xor() 12009 MB/s, rmw enabled
May 14 00:00:44.332541 kernel: raid6: using neon recovery algorithm
May 14 00:00:44.337845 kernel: xor: measuring software checksum speed
May 14 00:00:44.337860 kernel: 8regs : 21607 MB/sec
May 14 00:00:44.338502 kernel: 32regs : 21681 MB/sec
May 14 00:00:44.339702 kernel: arm64_neon : 27775 MB/sec
May 14 00:00:44.339714 kernel: xor: using function: arm64_neon (27775 MB/sec)
May 14 00:00:44.392462 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 00:00:44.403507 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 00:00:44.406096 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:00:44.433801 systemd-udevd[465]: Using default interface naming scheme 'v255'.
May 14 00:00:44.437540 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:00:44.440036 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 00:00:44.467354 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
May 14 00:00:44.500484 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 00:00:44.502962 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 00:00:44.556030 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:00:44.560101 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 00:00:44.582925 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 00:00:44.586095 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 00:00:44.587586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:00:44.590235 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 00:00:44.593217 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 00:00:44.611117 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 14 00:00:44.613475 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 00:00:44.614677 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 00:00:44.625944 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 00:00:44.625991 kernel: GPT:9289727 != 19775487
May 14 00:00:44.626359 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:00:44.626502 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:44.629558 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:00:44.633560 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 00:00:44.633582 kernel: GPT:9289727 != 19775487
May 14 00:00:44.633597 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 00:00:44.631286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:00:44.631456 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:44.635446 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:44.639082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:00:44.638655 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:44.659452 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (524)
May 14 00:00:44.666469 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (527)
May 14 00:00:44.665147 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 00:00:44.667673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:44.677326 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 00:00:44.692548 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 00:00:44.693782 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 00:00:44.703169 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 00:00:44.705226 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 00:00:44.707121 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:00:44.723198 disk-uuid[554]: Primary Header is updated.
May 14 00:00:44.723198 disk-uuid[554]: Secondary Entries is updated.
May 14 00:00:44.723198 disk-uuid[554]: Secondary Header is updated.
May 14 00:00:44.727610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:00:44.734397 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:45.738333 disk-uuid[559]: The operation has completed successfully.
May 14 00:00:45.739589 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:00:45.760533 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 00:00:45.760631 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 00:00:45.789633 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 00:00:45.805447 sh[575]: Success
May 14 00:00:45.822688 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 14 00:00:45.850923 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 00:00:45.853834 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 00:00:45.867747 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 00:00:45.877728 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d
May 14 00:00:45.877778 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 00:00:45.877797 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 00:00:45.879504 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 00:00:45.879522 kernel: BTRFS info (device dm-0): using free space tree
May 14 00:00:45.883576 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 00:00:45.884960 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 00:00:45.885774 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 00:00:45.888404 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 00:00:45.915178 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 14 00:00:45.915237 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 00:00:45.915247 kernel: BTRFS info (device vda6): using free space tree
May 14 00:00:45.918452 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:00:45.922457 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 14 00:00:45.925212 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 00:00:45.927566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 00:00:46.000137 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 00:00:46.004607 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 00:00:46.035728 ignition[665]: Ignition 2.20.0
May 14 00:00:46.035737 ignition[665]: Stage: fetch-offline
May 14 00:00:46.035778 ignition[665]: no configs at "/usr/lib/ignition/base.d"
May 14 00:00:46.035787 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:46.035948 ignition[665]: parsed url from cmdline: ""
May 14 00:00:46.035952 ignition[665]: no config URL provided
May 14 00:00:46.035957 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
May 14 00:00:46.035978 ignition[665]: no config at "/usr/lib/ignition/user.ign"
May 14 00:00:46.036003 ignition[665]: op(1): [started] loading QEMU firmware config module
May 14 00:00:46.036008 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 00:00:46.044840 ignition[665]: op(1): [finished] loading QEMU firmware config module
May 14 00:00:46.046181 systemd-networkd[763]: lo: Link UP
May 14 00:00:46.046185 systemd-networkd[763]: lo: Gained carrier
May 14 00:00:46.047071 systemd-networkd[763]: Enumeration completed
May 14 00:00:46.047251 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 00:00:46.048030 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:00:46.048033 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 00:00:46.048656 systemd-networkd[763]: eth0: Link UP
May 14 00:00:46.048659 systemd-networkd[763]: eth0: Gained carrier
May 14 00:00:46.048666 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:00:46.049654 systemd[1]: Reached target network.target - Network.
May 14 00:00:46.065491 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 00:00:46.094795 ignition[665]: parsing config with SHA512: 8c341f6ee6791ec4f79ecacf502df4903f115a71d65cfb069ec3c1d540310ad9bd10403e9c8e23087659df04b01cf7a1f3ec7be309cdbcae14c0e538139939fc
May 14 00:00:46.099397 unknown[665]: fetched base config from "system"
May 14 00:00:46.099407 unknown[665]: fetched user config from "qemu"
May 14 00:00:46.099782 ignition[665]: fetch-offline: fetch-offline passed
May 14 00:00:46.101821 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 00:00:46.099851 ignition[665]: Ignition finished successfully
May 14 00:00:46.103366 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 00:00:46.105117 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 00:00:46.133317 ignition[772]: Ignition 2.20.0
May 14 00:00:46.133328 ignition[772]: Stage: kargs
May 14 00:00:46.133523 ignition[772]: no configs at "/usr/lib/ignition/base.d"
May 14 00:00:46.133534 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:46.134381 ignition[772]: kargs: kargs passed
May 14 00:00:46.137093 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 00:00:46.134443 ignition[772]: Ignition finished successfully
May 14 00:00:46.139041 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 00:00:46.166107 ignition[781]: Ignition 2.20.0
May 14 00:00:46.166119 ignition[781]: Stage: disks
May 14 00:00:46.166267 ignition[781]: no configs at "/usr/lib/ignition/base.d"
May 14 00:00:46.166276 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:46.168593 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 00:00:46.167146 ignition[781]: disks: disks passed
May 14 00:00:46.170462 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 00:00:46.167190 ignition[781]: Ignition finished successfully
May 14 00:00:46.171999 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 00:00:46.173537 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 00:00:46.175311 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 00:00:46.176835 systemd[1]: Reached target basic.target - Basic System.
May 14 00:00:46.179543 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 00:00:46.203483 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 14 00:00:46.207487 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 00:00:46.211608 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 00:00:46.269445 kernel: EXT4-fs (vda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none.
May 14 00:00:46.269762 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 00:00:46.270960 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 00:00:46.273178 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 00:00:46.274965 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 00:00:46.275932 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 00:00:46.275974 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 00:00:46.275996 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 00:00:46.290901 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 00:00:46.293593 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 00:00:46.299976 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799)
May 14 00:00:46.300000 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 14 00:00:46.300077 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 00:00:46.300100 kernel: BTRFS info (device vda6): using free space tree
May 14 00:00:46.302480 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:00:46.303636 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 00:00:46.344413 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
May 14 00:00:46.348762 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
May 14 00:00:46.352693 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
May 14 00:00:46.356321 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 00:00:46.424565 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 00:00:46.426488 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 00:00:46.428024 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 00:00:46.452457 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 14 00:00:46.468711 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 00:00:46.480333 ignition[912]: INFO : Ignition 2.20.0
May 14 00:00:46.480333 ignition[912]: INFO : Stage: mount
May 14 00:00:46.481927 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:00:46.481927 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:46.481927 ignition[912]: INFO : mount: mount passed
May 14 00:00:46.481927 ignition[912]: INFO : Ignition finished successfully
May 14 00:00:46.485417 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 00:00:46.488269 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 00:00:46.876570 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 00:00:46.879115 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 00:00:46.897463 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
May 14 00:00:46.899931 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 14 00:00:46.899976 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 00:00:46.899988 kernel: BTRFS info (device vda6): using free space tree
May 14 00:00:46.903449 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:00:46.904153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 00:00:46.930971 ignition[942]: INFO : Ignition 2.20.0
May 14 00:00:46.930971 ignition[942]: INFO : Stage: files
May 14 00:00:46.932618 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:00:46.932618 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:46.932618 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
May 14 00:00:46.935752 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 00:00:46.935752 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 00:00:46.938706 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 00:00:46.939984 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 00:00:46.939984 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 00:00:46.939403 unknown[942]: wrote ssh authorized keys file for user: core
May 14 00:00:46.943650 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 00:00:46.943650 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 00:00:47.051967 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 00:00:47.189730 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 00:00:47.189730 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 00:00:47.193537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 14 00:00:47.452912 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 00:00:47.483604 systemd-networkd[763]: eth0: Gained IPv6LL
May 14 00:00:47.754258 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 00:00:47.754258 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 00:00:47.757842 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 00:00:47.757842 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 00:00:47.757842 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 00:00:47.757842 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 14 00:00:47.757842 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 00:00:47.757842 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 00:00:47.757842 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 14 00:00:47.757842 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 14 00:00:47.776456 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 00:00:47.779705 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 00:00:47.782356 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 00:00:47.782356 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 14 00:00:47.782356 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 14 00:00:47.782356 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 00:00:47.782356 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 00:00:47.782356 ignition[942]: INFO : files: files passed
May 14 00:00:47.782356 ignition[942]: INFO : Ignition finished successfully
May 14 00:00:47.782678 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 00:00:47.785652 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 00:00:47.787600 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 00:00:47.804132 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 00:00:47.802702 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 00:00:47.802797 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 00:00:47.807735 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:00:47.807735 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:00:47.810674 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:00:47.811341 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 00:00:47.813257 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 00:00:47.815785 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 00:00:47.867417 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 00:00:47.867539 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 00:00:47.869755 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 00:00:47.871463 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 00:00:47.873311 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 00:00:47.874169 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 00:00:47.896821 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 00:00:47.899229 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 00:00:47.920284 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 00:00:47.921530 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:00:47.923550 systemd[1]: Stopped target timers.target - Timer Units.
May 14 00:00:47.925293 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 00:00:47.925438 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 00:00:47.927884 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 00:00:47.929821 systemd[1]: Stopped target basic.target - Basic System.
May 14 00:00:47.931374 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 00:00:47.933051 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 00:00:47.934921 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 00:00:47.936853 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 00:00:47.938598 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 00:00:47.940489 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 00:00:47.942406 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 00:00:47.944084 systemd[1]: Stopped target swap.target - Swaps.
May 14 00:00:47.945550 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 00:00:47.945676 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 00:00:47.947909 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 00:00:47.949786 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:00:47.951622 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 00:00:47.955511 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:00:47.956761 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 00:00:47.956883 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 00:00:47.959706 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 00:00:47.959830 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 00:00:47.961724 systemd[1]: Stopped target paths.target - Path Units.
May 14 00:00:47.963284 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 00:00:47.967489 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:00:47.968776 systemd[1]: Stopped target slices.target - Slice Units.
May 14 00:00:47.970846 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 00:00:47.972448 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 00:00:47.972548 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 00:00:47.974109 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 00:00:47.974188 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 00:00:47.975675 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 00:00:47.975789 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 00:00:47.977567 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 00:00:47.977667 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 00:00:47.980028 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 00:00:47.982652 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 00:00:47.983759 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 00:00:47.983890 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:00:47.985657 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 00:00:47.985757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 00:00:47.992627 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 00:00:47.992721 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 00:00:48.002362 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 00:00:48.003736 ignition[998]: INFO : Ignition 2.20.0
May 14 00:00:48.003736 ignition[998]: INFO : Stage: umount
May 14 00:00:48.005571 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:00:48.005571 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:48.005571 ignition[998]: INFO : umount: umount passed
May 14 00:00:48.005571 ignition[998]: INFO : Ignition finished successfully
May 14 00:00:48.007755 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 00:00:48.007853 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 00:00:48.009791 systemd[1]: Stopped target network.target - Network.
May 14 00:00:48.010816 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 00:00:48.010897 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 00:00:48.012481 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 00:00:48.012532 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 00:00:48.014189 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 00:00:48.014239 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 00:00:48.015719 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 00:00:48.015761 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 00:00:48.017668 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 00:00:48.019421 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 00:00:48.023611 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 00:00:48.023721 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 00:00:48.026958 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 00:00:48.027194 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 00:00:48.027284 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 00:00:48.029982 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 00:00:48.030671 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 00:00:48.030735 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:00:48.033634 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 00:00:48.034528 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 00:00:48.034604 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 00:00:48.036809 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 00:00:48.036861 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 00:00:48.039485 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 00:00:48.039529 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 00:00:48.041578 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 00:00:48.041626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:00:48.044656 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:00:48.047222 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 00:00:48.047279 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 00:00:48.066338 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 00:00:48.066477 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 00:00:48.069095 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 00:00:48.069250 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:00:48.071570 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 00:00:48.071608 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 00:00:48.073358 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 00:00:48.073390 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:00:48.075158 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 00:00:48.075210 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 00:00:48.077821 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 00:00:48.077873 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 00:00:48.080612 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:00:48.080660 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:48.083560 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 00:00:48.084677 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 00:00:48.084738 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:00:48.087532 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 14 00:00:48.087578 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 00:00:48.089857 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 00:00:48.089905 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:00:48.091875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:00:48.091923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:48.095713 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 00:00:48.095769 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 00:00:48.096615 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 00:00:48.096697 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 00:00:48.098667 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 00:00:48.098760 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 00:00:48.101446 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 00:00:48.101525 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 00:00:48.103769 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 00:00:48.106194 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 00:00:48.127581 systemd[1]: Switching root.
May 14 00:00:48.153631 systemd-journald[236]: Journal stopped
May 14 00:00:48.994063 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
May 14 00:00:48.994117 kernel: SELinux: policy capability network_peer_controls=1
May 14 00:00:48.994132 kernel: SELinux: policy capability open_perms=1
May 14 00:00:48.994145 kernel: SELinux: policy capability extended_socket_class=1
May 14 00:00:48.994155 kernel: SELinux: policy capability always_check_network=0
May 14 00:00:48.994164 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 00:00:48.994173 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 00:00:48.994183 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 00:00:48.994192 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 00:00:48.994201 kernel: audit: type=1403 audit(1747180848.365:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 00:00:48.994214 systemd[1]: Successfully loaded SELinux policy in 35.964ms.
May 14 00:00:48.994237 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.454ms.
May 14 00:00:48.994249 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 00:00:48.994260 systemd[1]: Detected virtualization kvm.
May 14 00:00:48.994272 systemd[1]: Detected architecture arm64.
May 14 00:00:48.994282 systemd[1]: Detected first boot.
May 14 00:00:48.994293 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:00:48.994303 zram_generator::config[1045]: No configuration found.
May 14 00:00:48.994324 kernel: NET: Registered PF_VSOCK protocol family
May 14 00:00:48.994339 systemd[1]: Populated /etc with preset unit settings.
May 14 00:00:48.994351 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 00:00:48.994361 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 00:00:48.994372 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 00:00:48.994382 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 00:00:48.994393 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 00:00:48.994403 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 00:00:48.994413 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 00:00:48.994437 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 00:00:48.994449 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 00:00:48.994461 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 00:00:48.994472 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 00:00:48.994492 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 00:00:48.994505 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:00:48.994516 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:00:48.994527 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 00:00:48.994538 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 00:00:48.994551 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 00:00:48.994563 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 00:00:48.994574 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 00:00:48.994584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:00:48.994595 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 00:00:48.994606 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 00:00:48.994629 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 00:00:48.994640 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 00:00:48.994654 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:00:48.994664 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 00:00:48.994675 systemd[1]: Reached target slices.target - Slice Units.
May 14 00:00:48.994686 systemd[1]: Reached target swap.target - Swaps.
May 14 00:00:48.994697 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 00:00:48.994707 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 00:00:48.994717 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 00:00:48.994728 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:00:48.994739 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 00:00:48.994752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:00:48.994763 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 00:00:48.994773 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 00:00:48.994784 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 00:00:48.994795 systemd[1]: Mounting media.mount - External Media Directory...
May 14 00:00:48.994805 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 00:00:48.994815 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 00:00:48.994825 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 00:00:48.994835 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 00:00:48.994866 systemd[1]: Reached target machines.target - Containers.
May 14 00:00:48.994876 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 00:00:48.994886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:00:48.994897 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 00:00:48.994907 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 00:00:48.994917 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:00:48.994928 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 00:00:48.994938 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:00:48.994949 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 00:00:48.994960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:00:48.994972 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 00:00:48.994982 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 00:00:48.994994 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 00:00:48.995004 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 00:00:48.995015 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 00:00:48.995025 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:00:48.995037 kernel: fuse: init (API version 7.39)
May 14 00:00:48.995047 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 00:00:48.995057 kernel: loop: module loaded
May 14 00:00:48.995066 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 00:00:48.995076 kernel: ACPI: bus type drm_connector registered
May 14 00:00:48.995085 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 00:00:48.995096 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 00:00:48.995106 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 00:00:48.995135 systemd-journald[1113]: Collecting audit messages is disabled.
May 14 00:00:48.995158 systemd-journald[1113]: Journal started
May 14 00:00:48.995179 systemd-journald[1113]: Runtime Journal (/run/log/journal/d4ec4f3d3fce4627816be551a5991182) is 5.9M, max 47.3M, 41.4M free.
May 14 00:00:48.775736 systemd[1]: Queued start job for default target multi-user.target.
May 14 00:00:48.799332 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 00:00:48.799727 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 00:00:48.998599 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 00:00:49.001446 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 00:00:49.002813 systemd[1]: Stopped verity-setup.service.
May 14 00:00:49.009415 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 00:00:49.010122 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 00:00:49.011283 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 00:00:49.012518 systemd[1]: Mounted media.mount - External Media Directory.
May 14 00:00:49.013578 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 00:00:49.014710 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 00:00:49.015936 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 00:00:49.017172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:00:49.018680 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 00:00:49.018865 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 00:00:49.020328 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:00:49.020550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:00:49.021895 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 00:00:49.022075 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 00:00:49.023435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:00:49.023604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:00:49.025155 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 00:00:49.025345 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 00:00:49.026744 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:00:49.027532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:00:49.028926 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 00:00:49.030386 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 00:00:49.033460 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 00:00:49.035062 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 00:00:49.042450 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 00:00:49.050325 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 00:00:49.053194 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 00:00:49.055535 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 00:00:49.056655 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 00:00:49.056713 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 00:00:49.058708 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 00:00:49.068372 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 00:00:49.070513 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 00:00:49.071584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:00:49.072777 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 00:00:49.075304 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 00:00:49.076615 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 00:00:49.080552 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 00:00:49.081682 systemd-journald[1113]: Time spent on flushing to /var/log/journal/d4ec4f3d3fce4627816be551a5991182 is 11.464ms for 864 entries.
May 14 00:00:49.081682 systemd-journald[1113]: System Journal (/var/log/journal/d4ec4f3d3fce4627816be551a5991182) is 8M, max 195.6M, 187.6M free.
May 14 00:00:49.098847 systemd-journald[1113]: Received client request to flush runtime journal.
May 14 00:00:49.082871 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 00:00:49.083918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 00:00:49.086720 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 00:00:49.100243 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 00:00:49.107718 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:00:49.109508 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 00:00:49.111572 kernel: loop0: detected capacity change from 0 to 103832
May 14 00:00:49.112338 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 00:00:49.115636 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 00:00:49.117544 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 00:00:49.126974 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 00:00:49.128773 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
May 14 00:00:49.139816 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 00:00:49.128788 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
May 14 00:00:49.141486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 00:00:49.144588 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 00:00:49.148368 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 00:00:49.149998 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 00:00:49.162753 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 00:00:49.165488 kernel: loop1: detected capacity change from 0 to 194096
May 14 00:00:49.166137 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 00:00:49.173462 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 14 00:00:49.187408 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 00:00:49.190054 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 00:00:49.206098 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 00:00:49.209451 kernel: loop2: detected capacity change from 0 to 126448
May 14 00:00:49.216476 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
May 14 00:00:49.216495 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
May 14 00:00:49.222519 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:00:49.238492 kernel: loop3: detected capacity change from 0 to 103832
May 14 00:00:49.243446 kernel: loop4: detected capacity change from 0 to 194096
May 14 00:00:49.250462 kernel: loop5: detected capacity change from 0 to 126448
May 14 00:00:49.254059 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 00:00:49.254486 (sd-merge)[1191]: Merged extensions into '/usr'.
May 14 00:00:49.258040 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 00:00:49.258058 systemd[1]: Reloading...
May 14 00:00:49.312643 zram_generator::config[1219]: No configuration found.
May 14 00:00:49.379805 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 00:00:49.412764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:00:49.463716 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 00:00:49.464010 systemd[1]: Reloading finished in 205 ms.
May 14 00:00:49.490462 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 00:00:49.491861 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 00:00:49.504767 systemd[1]: Starting ensure-sysext.service...
May 14 00:00:49.506712 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 00:00:49.518796 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
May 14 00:00:49.518814 systemd[1]: Reloading...
May 14 00:00:49.529035 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 00:00:49.529239 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 00:00:49.529864 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 00:00:49.530055 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
May 14 00:00:49.530101 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
May 14 00:00:49.533332 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
May 14 00:00:49.533466 systemd-tmpfiles[1254]: Skipping /boot
May 14 00:00:49.542596 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
May 14 00:00:49.542751 systemd-tmpfiles[1254]: Skipping /boot
May 14 00:00:49.575456 zram_generator::config[1280]: No configuration found.
May 14 00:00:49.655780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:00:49.706088 systemd[1]: Reloading finished in 186 ms.
May 14 00:00:49.717193 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 00:00:49.719496 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:00:49.736206 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 00:00:49.738571 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 00:00:49.750443 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 00:00:49.753560 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 00:00:49.761607 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:00:49.763973 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 00:00:49.774657 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 00:00:49.780151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:00:49.785010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:00:49.790143 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:00:49.795535 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:00:49.797820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:00:49.798017 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:00:49.803536 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 00:00:49.810431 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 00:00:49.812484 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:00:49.813621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:00:49.815674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:00:49.815832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:00:49.817511 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:00:49.817657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:00:49.823165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:00:49.825698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:00:49.827923 systemd-udevd[1324]: Using default interface naming scheme 'v255'.
May 14 00:00:49.833741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:00:49.838238 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:00:49.839482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:00:49.839609 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:00:49.843721 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 00:00:49.845064 augenrules[1362]: No rules
May 14 00:00:49.847048 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 00:00:49.848980 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 00:00:49.849193 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 00:00:49.852034 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 00:00:49.853928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:00:49.854091 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:00:49.855721 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:00:49.857480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:00:49.857700 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:00:49.859380 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:00:49.859565 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:00:49.860966 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 00:00:49.873160 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 00:00:49.874362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:00:49.878717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:00:49.883709 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 00:00:49.887453 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:00:49.892653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:00:49.893759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:00:49.893800 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:00:49.896824 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 00:00:49.898995 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 00:00:49.899700 systemd[1]: Finished ensure-sysext.service.
May 14 00:00:49.900897 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:00:49.901062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:00:49.902497 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 00:00:49.902662 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 00:00:49.904013 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:00:49.905649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:00:49.908866 systemd-resolved[1323]: Positive Trust Anchors:
May 14 00:00:49.909044 systemd-resolved[1323]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:00:49.909076 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:00:49.923020 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:00:49.923221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:00:49.924802 systemd-resolved[1323]: Defaulting to hostname 'linux'.
May 14 00:00:49.925755 augenrules[1392]: /sbin/augenrules: No change
May 14 00:00:49.935401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1371)
May 14 00:00:49.927603 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 00:00:49.928566 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 00:00:49.928625 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 00:00:49.937735 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 00:00:49.939512 augenrules[1423]: No rules
May 14 00:00:49.940647 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 00:00:49.949655 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 00:00:49.950068 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 00:00:49.963180 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 00:00:49.966261 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 00:00:49.971672 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 00:00:49.994499 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 00:00:50.022211 systemd-networkd[1398]: lo: Link UP May 14 00:00:50.022220 systemd-networkd[1398]: lo: Gained carrier May 14 00:00:50.023597 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 00:00:50.025047 systemd-networkd[1398]: Enumeration completed May 14 00:00:50.025240 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 00:00:50.026930 systemd[1]: Reached target network.target - Network. May 14 00:00:50.028313 systemd[1]: Reached target time-set.target - System Time Set. May 14 00:00:50.029247 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:00:50.029255 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:00:50.029793 systemd-networkd[1398]: eth0: Link UP May 14 00:00:50.029800 systemd-networkd[1398]: eth0: Gained carrier May 14 00:00:50.029813 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:00:50.031492 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 00:00:50.034541 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 14 00:00:50.050593 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:00:50.051224 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 14 00:00:50.471502 systemd-timesyncd[1422]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 00:00:50.471560 systemd-timesyncd[1422]: Initial clock synchronization to Wed 2025-05-14 00:00:50.471414 UTC. May 14 00:00:50.471608 systemd-resolved[1323]: Clock change detected. Flushing caches. May 14 00:00:50.482891 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 00:00:50.487600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:00:50.500071 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 00:00:50.503005 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 00:00:50.525725 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:00:50.538971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:00:50.569741 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 00:00:50.571193 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 00:00:50.572537 systemd[1]: Reached target sysinit.target - System Initialization. May 14 00:00:50.573663 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 00:00:50.574906 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 00:00:50.576323 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 00:00:50.577546 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
May 14 00:00:50.578814 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 00:00:50.580001 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:00:50.580039 systemd[1]: Reached target paths.target - Path Units. May 14 00:00:50.580913 systemd[1]: Reached target timers.target - Timer Units. May 14 00:00:50.582807 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 00:00:50.585228 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 00:00:50.588462 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 00:00:50.589937 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 00:00:50.591160 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 00:00:50.594434 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 00:00:50.595960 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 00:00:50.598317 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 00:00:50.599990 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 00:00:50.601119 systemd[1]: Reached target sockets.target - Socket Units. May 14 00:00:50.602042 systemd[1]: Reached target basic.target - Basic System. May 14 00:00:50.602968 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 00:00:50.602996 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 00:00:50.603905 systemd[1]: Starting containerd.service - containerd container runtime... May 14 00:00:50.605662 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 14 00:00:50.605958 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 00:00:50.607881 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 00:00:50.612773 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 00:00:50.613804 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 00:00:50.614851 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 00:00:50.618481 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 00:00:50.621390 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 00:00:50.624021 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 00:00:50.627483 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 00:00:50.629075 jq[1458]: false May 14 00:00:50.629513 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:00:50.629993 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:00:50.631909 systemd[1]: Starting update-engine.service - Update Engine... May 14 00:00:50.636434 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 00:00:50.638623 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 00:00:50.642208 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:00:50.642413 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 00:00:50.645165 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 14 00:00:50.645347 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 00:00:50.649927 jq[1473]: true May 14 00:00:50.650081 dbus-daemon[1457]: [system] SELinux support is enabled May 14 00:00:50.650895 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 00:00:50.652296 extend-filesystems[1459]: Found loop3 May 14 00:00:50.652296 extend-filesystems[1459]: Found loop4 May 14 00:00:50.652296 extend-filesystems[1459]: Found loop5 May 14 00:00:50.652296 extend-filesystems[1459]: Found vda May 14 00:00:50.652296 extend-filesystems[1459]: Found vda1 May 14 00:00:50.656856 extend-filesystems[1459]: Found vda2 May 14 00:00:50.656856 extend-filesystems[1459]: Found vda3 May 14 00:00:50.656856 extend-filesystems[1459]: Found usr May 14 00:00:50.656856 extend-filesystems[1459]: Found vda4 May 14 00:00:50.656856 extend-filesystems[1459]: Found vda6 May 14 00:00:50.656856 extend-filesystems[1459]: Found vda7 May 14 00:00:50.656856 extend-filesystems[1459]: Found vda9 May 14 00:00:50.656856 extend-filesystems[1459]: Checking size of /dev/vda9 May 14 00:00:50.655116 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:00:50.655343 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 00:00:50.673817 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:00:50.673874 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 00:00:50.674467 jq[1484]: true May 14 00:00:50.674871 tar[1477]: linux-arm64/helm May 14 00:00:50.677810 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 14 00:00:50.677841 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 00:00:50.685719 update_engine[1468]: I20250514 00:00:50.683511 1468 main.cc:92] Flatcar Update Engine starting May 14 00:00:50.685985 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 00:00:50.688794 systemd[1]: Started update-engine.service - Update Engine. May 14 00:00:50.688914 update_engine[1468]: I20250514 00:00:50.688869 1468 update_check_scheduler.cc:74] Next update check in 4m21s May 14 00:00:50.692947 extend-filesystems[1459]: Resized partition /dev/vda9 May 14 00:00:50.691554 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 00:00:50.693898 extend-filesystems[1496]: resize2fs 1.47.2 (1-Jan-2025) May 14 00:00:50.704807 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:00:50.704851 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1369) May 14 00:00:50.728798 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:00:50.743685 extend-filesystems[1496]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:00:50.743685 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:00:50.743685 extend-filesystems[1496]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 00:00:50.749815 extend-filesystems[1459]: Resized filesystem in /dev/vda9 May 14 00:00:50.749013 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:00:50.749197 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 00:00:50.760462 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (Power Button) May 14 00:00:50.763022 systemd-logind[1466]: New seat seat0. May 14 00:00:50.766740 systemd[1]: Started systemd-logind.service - User Login Management. 
May 14 00:00:50.779329 bash[1511]: Updated "/home/core/.ssh/authorized_keys" May 14 00:00:50.779345 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 00:00:50.781881 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 00:00:50.807472 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:00:50.920504 containerd[1485]: time="2025-05-14T00:00:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 00:00:50.922014 containerd[1485]: time="2025-05-14T00:00:50.921954530Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 14 00:00:50.935584 containerd[1485]: time="2025-05-14T00:00:50.935464010Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.24µs" May 14 00:00:50.935584 containerd[1485]: time="2025-05-14T00:00:50.935575730Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 00:00:50.935729 containerd[1485]: time="2025-05-14T00:00:50.935654730Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 00:00:50.935909 containerd[1485]: time="2025-05-14T00:00:50.935871410Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 00:00:50.935909 containerd[1485]: time="2025-05-14T00:00:50.935904010Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 00:00:50.935952 containerd[1485]: time="2025-05-14T00:00:50.935932970Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 
00:00:50.936073 containerd[1485]: time="2025-05-14T00:00:50.936043890Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 00:00:50.936073 containerd[1485]: time="2025-05-14T00:00:50.936066890Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:00:50.936512 containerd[1485]: time="2025-05-14T00:00:50.936478650Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:00:50.936541 containerd[1485]: time="2025-05-14T00:00:50.936507650Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:00:50.936541 containerd[1485]: time="2025-05-14T00:00:50.936536170Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:00:50.936580 containerd[1485]: time="2025-05-14T00:00:50.936544690Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 00:00:50.936654 containerd[1485]: time="2025-05-14T00:00:50.936633290Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 00:00:50.937068 containerd[1485]: time="2025-05-14T00:00:50.937036730Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 00:00:50.937102 containerd[1485]: time="2025-05-14T00:00:50.937078130Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 May 14 00:00:50.937102 containerd[1485]: time="2025-05-14T00:00:50.937089490Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 00:00:50.938373 containerd[1485]: time="2025-05-14T00:00:50.938299210Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 00:00:50.938972 containerd[1485]: time="2025-05-14T00:00:50.938585130Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 00:00:50.938972 containerd[1485]: time="2025-05-14T00:00:50.938680930Z" level=info msg="metadata content store policy set" policy=shared May 14 00:00:50.988587 containerd[1485]: time="2025-05-14T00:00:50.988535450Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 00:00:50.988729 containerd[1485]: time="2025-05-14T00:00:50.988605570Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 00:00:50.988729 containerd[1485]: time="2025-05-14T00:00:50.988620850Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 00:00:50.988729 containerd[1485]: time="2025-05-14T00:00:50.988640370Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 00:00:50.988729 containerd[1485]: time="2025-05-14T00:00:50.988666450Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 00:00:50.988729 containerd[1485]: time="2025-05-14T00:00:50.988679370Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 00:00:50.988729 containerd[1485]: time="2025-05-14T00:00:50.988695410Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 00:00:50.988729 
containerd[1485]: time="2025-05-14T00:00:50.988714730Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 00:00:50.988729 containerd[1485]: time="2025-05-14T00:00:50.988726570Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 00:00:50.988858 containerd[1485]: time="2025-05-14T00:00:50.988738050Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 00:00:50.988858 containerd[1485]: time="2025-05-14T00:00:50.988748170Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 00:00:50.988858 containerd[1485]: time="2025-05-14T00:00:50.988759930Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 00:00:50.989006 containerd[1485]: time="2025-05-14T00:00:50.988914450Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 00:00:50.989006 containerd[1485]: time="2025-05-14T00:00:50.988944370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 00:00:50.989006 containerd[1485]: time="2025-05-14T00:00:50.988958490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 00:00:50.989006 containerd[1485]: time="2025-05-14T00:00:50.988969730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 00:00:50.989006 containerd[1485]: time="2025-05-14T00:00:50.988980850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 00:00:50.989006 containerd[1485]: time="2025-05-14T00:00:50.988991890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 00:00:50.989006 containerd[1485]: 
time="2025-05-14T00:00:50.989004290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 00:00:50.989224 containerd[1485]: time="2025-05-14T00:00:50.989015170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 00:00:50.989224 containerd[1485]: time="2025-05-14T00:00:50.989028090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 00:00:50.989224 containerd[1485]: time="2025-05-14T00:00:50.989039890Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 00:00:50.989224 containerd[1485]: time="2025-05-14T00:00:50.989050090Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 00:00:50.989400 containerd[1485]: time="2025-05-14T00:00:50.989312570Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 00:00:50.989400 containerd[1485]: time="2025-05-14T00:00:50.989335330Z" level=info msg="Start snapshots syncer" May 14 00:00:50.989400 containerd[1485]: time="2025-05-14T00:00:50.989357370Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 00:00:50.989643 containerd[1485]: time="2025-05-14T00:00:50.989605290Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 00:00:50.989860 containerd[1485]: time="2025-05-14T00:00:50.989741210Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 00:00:50.989964 containerd[1485]: time="2025-05-14T00:00:50.989941490Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 00:00:50.990114 containerd[1485]: time="2025-05-14T00:00:50.990092450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 00:00:50.990163 containerd[1485]: time="2025-05-14T00:00:50.990124370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 00:00:50.990163 containerd[1485]: time="2025-05-14T00:00:50.990140690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 00:00:50.990163 containerd[1485]: time="2025-05-14T00:00:50.990151690Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 00:00:50.990221 containerd[1485]: time="2025-05-14T00:00:50.990164530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 00:00:50.990221 containerd[1485]: time="2025-05-14T00:00:50.990175170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 00:00:50.990221 containerd[1485]: time="2025-05-14T00:00:50.990185610Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 00:00:50.990221 containerd[1485]: time="2025-05-14T00:00:50.990216970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 00:00:50.990293 containerd[1485]: time="2025-05-14T00:00:50.990230490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 00:00:50.990293 containerd[1485]: time="2025-05-14T00:00:50.990241010Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 00:00:50.990293 containerd[1485]: time="2025-05-14T00:00:50.990280090Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:00:50.990340 containerd[1485]: time="2025-05-14T00:00:50.990295730Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:00:50.990668 containerd[1485]: time="2025-05-14T00:00:50.990626210Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:00:50.990720 containerd[1485]: time="2025-05-14T00:00:50.990680010Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:00:50.990720 containerd[1485]: time="2025-05-14T00:00:50.990693650Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 00:00:50.990720 containerd[1485]: time="2025-05-14T00:00:50.990710090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 00:00:50.990779 containerd[1485]: time="2025-05-14T00:00:50.990727170Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 00:00:50.990855 containerd[1485]: time="2025-05-14T00:00:50.990827170Z" level=info msg="runtime interface created" May 14 00:00:50.990855 containerd[1485]: time="2025-05-14T00:00:50.990840810Z" level=info msg="created NRI interface" May 14 00:00:50.990855 containerd[1485]: time="2025-05-14T00:00:50.990851450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 00:00:50.991030 containerd[1485]: time="2025-05-14T00:00:50.990867490Z" level=info msg="Connect containerd service" May 14 00:00:50.991030 containerd[1485]: time="2025-05-14T00:00:50.990907530Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 00:00:50.991849 
containerd[1485]: time="2025-05-14T00:00:50.991817210Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:00:51.064009 tar[1477]: linux-arm64/LICENSE May 14 00:00:51.064009 tar[1477]: linux-arm64/README.md May 14 00:00:51.080168 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 00:00:51.104364 containerd[1485]: time="2025-05-14T00:00:51.104280850Z" level=info msg="Start subscribing containerd event" May 14 00:00:51.104364 containerd[1485]: time="2025-05-14T00:00:51.104371010Z" level=info msg="Start recovering state" May 14 00:00:51.104484 containerd[1485]: time="2025-05-14T00:00:51.104467010Z" level=info msg="Start event monitor" May 14 00:00:51.104503 containerd[1485]: time="2025-05-14T00:00:51.104483010Z" level=info msg="Start cni network conf syncer for default" May 14 00:00:51.104503 containerd[1485]: time="2025-05-14T00:00:51.104492770Z" level=info msg="Start streaming server" May 14 00:00:51.104503 containerd[1485]: time="2025-05-14T00:00:51.104501530Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 00:00:51.104590 containerd[1485]: time="2025-05-14T00:00:51.104509570Z" level=info msg="runtime interface starting up..." May 14 00:00:51.104590 containerd[1485]: time="2025-05-14T00:00:51.104515890Z" level=info msg="starting plugins..." May 14 00:00:51.104590 containerd[1485]: time="2025-05-14T00:00:51.104538490Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 00:00:51.105277 containerd[1485]: time="2025-05-14T00:00:51.105237970Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:00:51.105328 containerd[1485]: time="2025-05-14T00:00:51.105307370Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 14 00:00:51.107738 containerd[1485]: time="2025-05-14T00:00:51.107712130Z" level=info msg="containerd successfully booted in 0.187636s" May 14 00:00:51.107804 systemd[1]: Started containerd.service - containerd container runtime. May 14 00:00:51.742791 systemd-networkd[1398]: eth0: Gained IPv6LL May 14 00:00:51.747687 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 00:00:51.749289 systemd[1]: Reached target network-online.target - Network is Online. May 14 00:00:51.752407 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 00:00:51.754683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:51.768902 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 00:00:51.791163 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 00:00:51.791354 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 00:00:51.795048 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 00:00:51.798628 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 00:00:51.911886 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:00:51.930495 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 00:00:51.935335 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 00:00:51.959203 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:00:51.959409 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 00:00:51.962074 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 00:00:51.980745 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 00:00:51.983602 systemd[1]: Started getty@tty1.service - Getty on tty1. 
May 14 00:00:51.985824 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 00:00:51.987193 systemd[1]: Reached target getty.target - Login Prompts. May 14 00:00:52.306950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:52.308468 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 00:00:52.310946 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:00:52.312746 systemd[1]: Startup finished in 560ms (kernel) + 4.663s (initrd) + 3.569s (userspace) = 8.793s. May 14 00:00:52.818355 kubelet[1583]: E0514 00:00:52.818270 1583 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:00:52.820579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:00:52.820748 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:00:52.821806 systemd[1]: kubelet.service: Consumed 823ms CPU time, 242.4M memory peak. May 14 00:00:56.860412 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 00:00:56.861574 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:45370.service - OpenSSH per-connection server daemon (10.0.0.1:45370). May 14 00:00:56.937488 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 45370 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 14 00:00:56.939362 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:56.945188 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 14 00:00:56.946279 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 00:00:56.951270 systemd-logind[1466]: New session 1 of user core. May 14 00:00:56.968684 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 00:00:56.971053 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 00:00:56.986730 (systemd)[1601]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:00:56.988962 systemd-logind[1466]: New session c1 of user core. May 14 00:00:57.101318 systemd[1601]: Queued start job for default target default.target. May 14 00:00:57.114620 systemd[1601]: Created slice app.slice - User Application Slice. May 14 00:00:57.114669 systemd[1601]: Reached target paths.target - Paths. May 14 00:00:57.114711 systemd[1601]: Reached target timers.target - Timers. May 14 00:00:57.116014 systemd[1601]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 00:00:57.124977 systemd[1601]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 00:00:57.125047 systemd[1601]: Reached target sockets.target - Sockets. May 14 00:00:57.125086 systemd[1601]: Reached target basic.target - Basic System. May 14 00:00:57.125116 systemd[1601]: Reached target default.target - Main User Target. May 14 00:00:57.125142 systemd[1601]: Startup finished in 130ms. May 14 00:00:57.125285 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 00:00:57.133823 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 00:00:57.198263 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:45374.service - OpenSSH per-connection server daemon (10.0.0.1:45374). 
May 14 00:00:57.239611 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 45374 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 14 00:00:57.240780 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:57.244996 systemd-logind[1466]: New session 2 of user core. May 14 00:00:57.253794 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 00:00:57.305243 sshd[1614]: Connection closed by 10.0.0.1 port 45374 May 14 00:00:57.305693 sshd-session[1612]: pam_unix(sshd:session): session closed for user core May 14 00:00:57.323005 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:45374.service: Deactivated successfully. May 14 00:00:57.324463 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:00:57.325123 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit. May 14 00:00:57.327852 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:45382.service - OpenSSH per-connection server daemon (10.0.0.1:45382). May 14 00:00:57.329174 systemd-logind[1466]: Removed session 2. May 14 00:00:57.378812 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 45382 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 14 00:00:57.380132 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:57.385679 systemd-logind[1466]: New session 3 of user core. May 14 00:00:57.392805 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 00:00:57.441990 sshd[1622]: Connection closed by 10.0.0.1 port 45382 May 14 00:00:57.442404 sshd-session[1619]: pam_unix(sshd:session): session closed for user core May 14 00:00:57.460826 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:45382.service: Deactivated successfully. May 14 00:00:57.462394 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:00:57.463731 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit. 
May 14 00:00:57.465259 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:45398.service - OpenSSH per-connection server daemon (10.0.0.1:45398). May 14 00:00:57.465996 systemd-logind[1466]: Removed session 3. May 14 00:00:57.514850 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 45398 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 14 00:00:57.516052 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:57.519924 systemd-logind[1466]: New session 4 of user core. May 14 00:00:57.532867 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 00:00:57.585375 sshd[1630]: Connection closed by 10.0.0.1 port 45398 May 14 00:00:57.585708 sshd-session[1627]: pam_unix(sshd:session): session closed for user core May 14 00:00:57.599327 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:45398.service: Deactivated successfully. May 14 00:00:57.600920 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:00:57.602771 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit. May 14 00:00:57.604413 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:45404.service - OpenSSH per-connection server daemon (10.0.0.1:45404). May 14 00:00:57.605389 systemd-logind[1466]: Removed session 4. May 14 00:00:57.660170 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 45404 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 14 00:00:57.661568 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:57.665900 systemd-logind[1466]: New session 5 of user core. May 14 00:00:57.672820 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 00:00:57.732669 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 00:00:57.732957 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:00:57.746583 sudo[1639]: pam_unix(sudo:session): session closed for user root May 14 00:00:57.748831 sshd[1638]: Connection closed by 10.0.0.1 port 45404 May 14 00:00:57.748612 sshd-session[1635]: pam_unix(sshd:session): session closed for user core May 14 00:00:57.762499 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:45404.service: Deactivated successfully. May 14 00:00:57.763962 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:00:57.764673 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit. May 14 00:00:57.766422 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:45420.service - OpenSSH per-connection server daemon (10.0.0.1:45420). May 14 00:00:57.767174 systemd-logind[1466]: Removed session 5. May 14 00:00:57.807487 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 45420 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 14 00:00:57.808892 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:57.813369 systemd-logind[1466]: New session 6 of user core. May 14 00:00:57.825804 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 00:00:57.877110 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 00:00:57.877387 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:00:57.880574 sudo[1649]: pam_unix(sudo:session): session closed for user root May 14 00:00:57.885418 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 00:00:57.885732 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:00:57.893931 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:00:57.929851 augenrules[1671]: No rules May 14 00:00:57.931525 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:00:57.932729 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 00:00:57.933807 sudo[1648]: pam_unix(sudo:session): session closed for user root May 14 00:00:57.935500 sshd[1647]: Connection closed by 10.0.0.1 port 45420 May 14 00:00:57.935410 sshd-session[1644]: pam_unix(sshd:session): session closed for user core May 14 00:00:57.945872 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:45420.service: Deactivated successfully. May 14 00:00:57.947510 systemd[1]: session-6.scope: Deactivated successfully. May 14 00:00:57.949875 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. May 14 00:00:57.951136 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:45434.service - OpenSSH per-connection server daemon (10.0.0.1:45434). May 14 00:00:57.951931 systemd-logind[1466]: Removed session 6. May 14 00:00:58.000059 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 45434 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 14 00:00:58.001256 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:58.005541 systemd-logind[1466]: New session 7 of user core. 
May 14 00:00:58.016825 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 00:00:58.067860 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:00:58.068453 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:00:58.406678 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 00:00:58.421964 (dockerd)[1705]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 00:00:58.670182 dockerd[1705]: time="2025-05-14T00:00:58.670060490Z" level=info msg="Starting up" May 14 00:00:58.671469 dockerd[1705]: time="2025-05-14T00:00:58.671358810Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 00:00:58.766991 dockerd[1705]: time="2025-05-14T00:00:58.766955290Z" level=info msg="Loading containers: start." May 14 00:00:58.909696 kernel: Initializing XFRM netlink socket May 14 00:00:58.971313 systemd-networkd[1398]: docker0: Link UP May 14 00:00:59.131003 dockerd[1705]: time="2025-05-14T00:00:59.130900330Z" level=info msg="Loading containers: done." 
May 14 00:00:59.142977 dockerd[1705]: time="2025-05-14T00:00:59.142930250Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:00:59.143122 dockerd[1705]: time="2025-05-14T00:00:59.143020090Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 14 00:00:59.143208 dockerd[1705]: time="2025-05-14T00:00:59.143189850Z" level=info msg="Daemon has completed initialization" May 14 00:00:59.173863 dockerd[1705]: time="2025-05-14T00:00:59.173794810Z" level=info msg="API listen on /run/docker.sock" May 14 00:00:59.173982 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 00:00:59.811929 containerd[1485]: time="2025-05-14T00:00:59.811887410Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 00:01:00.367030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount278650029.mount: Deactivated successfully. 
May 14 00:01:01.265235 containerd[1485]: time="2025-05-14T00:01:01.265131570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:01.266046 containerd[1485]: time="2025-05-14T00:01:01.265828170Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 14 00:01:01.266768 containerd[1485]: time="2025-05-14T00:01:01.266705530Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:01.269696 containerd[1485]: time="2025-05-14T00:01:01.269664210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:01.270938 containerd[1485]: time="2025-05-14T00:01:01.270700170Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.45876728s" May 14 00:01:01.270938 containerd[1485]: time="2025-05-14T00:01:01.270751050Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 14 00:01:01.285912 containerd[1485]: time="2025-05-14T00:01:01.285879690Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 00:01:02.492471 containerd[1485]: time="2025-05-14T00:01:02.492412290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:02.493425 containerd[1485]: time="2025-05-14T00:01:02.493236370Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 14 00:01:02.494509 containerd[1485]: time="2025-05-14T00:01:02.494075810Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:02.496634 containerd[1485]: time="2025-05-14T00:01:02.496596330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:02.497732 containerd[1485]: time="2025-05-14T00:01:02.497694250Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.211776s" May 14 00:01:02.497732 containerd[1485]: time="2025-05-14T00:01:02.497729490Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 14 00:01:02.515238 containerd[1485]: time="2025-05-14T00:01:02.515193050Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 14 00:01:03.071119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 00:01:03.074829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:03.185945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:01:03.190049 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:01:03.248467 kubelet[2005]: E0514 00:01:03.248393 2005 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:01:03.251678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:01:03.251826 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:01:03.252569 systemd[1]: kubelet.service: Consumed 141ms CPU time, 97M memory peak. May 14 00:01:03.436879 containerd[1485]: time="2025-05-14T00:01:03.436767730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:03.437742 containerd[1485]: time="2025-05-14T00:01:03.437680490Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 14 00:01:03.438433 containerd[1485]: time="2025-05-14T00:01:03.438388410Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:03.440802 containerd[1485]: time="2025-05-14T00:01:03.440766810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:03.442550 containerd[1485]: time="2025-05-14T00:01:03.442510970Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id 
\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 927.28016ms" May 14 00:01:03.442550 containerd[1485]: time="2025-05-14T00:01:03.442545250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 14 00:01:03.457600 containerd[1485]: time="2025-05-14T00:01:03.457561410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 00:01:04.332507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2785172507.mount: Deactivated successfully. May 14 00:01:04.679795 containerd[1485]: time="2025-05-14T00:01:04.679678330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:04.680754 containerd[1485]: time="2025-05-14T00:01:04.680564210Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 14 00:01:04.681470 containerd[1485]: time="2025-05-14T00:01:04.681408170Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:04.683392 containerd[1485]: time="2025-05-14T00:01:04.683325890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:04.684035 containerd[1485]: time="2025-05-14T00:01:04.683823610Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag 
\"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.22622684s" May 14 00:01:04.684035 containerd[1485]: time="2025-05-14T00:01:04.683857050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 14 00:01:04.698561 containerd[1485]: time="2025-05-14T00:01:04.698529250Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 00:01:05.166263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770351968.mount: Deactivated successfully. May 14 00:01:05.670375 containerd[1485]: time="2025-05-14T00:01:05.670212130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:05.670994 containerd[1485]: time="2025-05-14T00:01:05.670756770Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 14 00:01:05.671597 containerd[1485]: time="2025-05-14T00:01:05.671564250Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:05.674268 containerd[1485]: time="2025-05-14T00:01:05.674234650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:05.675411 containerd[1485]: time="2025-05-14T00:01:05.675366970Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 976.80096ms" May 14 00:01:05.675483 containerd[1485]: time="2025-05-14T00:01:05.675416610Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 00:01:05.691735 containerd[1485]: time="2025-05-14T00:01:05.691586330Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 00:01:06.167624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505061665.mount: Deactivated successfully. May 14 00:01:06.173386 containerd[1485]: time="2025-05-14T00:01:06.173340330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:06.174630 containerd[1485]: time="2025-05-14T00:01:06.174581130Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 14 00:01:06.175481 containerd[1485]: time="2025-05-14T00:01:06.175446810Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:06.177407 containerd[1485]: time="2025-05-14T00:01:06.177345250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:06.178052 containerd[1485]: time="2025-05-14T00:01:06.178018770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 486.38356ms" May 14 
00:01:06.178110 containerd[1485]: time="2025-05-14T00:01:06.178052730Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 14 00:01:06.193410 containerd[1485]: time="2025-05-14T00:01:06.193374490Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 00:01:06.655855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388476149.mount: Deactivated successfully. May 14 00:01:07.933058 containerd[1485]: time="2025-05-14T00:01:07.933007650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:07.934041 containerd[1485]: time="2025-05-14T00:01:07.933996090Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 14 00:01:07.934786 containerd[1485]: time="2025-05-14T00:01:07.934723770Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:07.938721 containerd[1485]: time="2025-05-14T00:01:07.937938130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:07.938811 containerd[1485]: time="2025-05-14T00:01:07.938728930Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.74528652s" May 14 00:01:07.938811 containerd[1485]: time="2025-05-14T00:01:07.938768330Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 14 00:01:12.947429 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:12.947983 systemd[1]: kubelet.service: Consumed 141ms CPU time, 97M memory peak. May 14 00:01:12.949945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:12.977719 systemd[1]: Reload requested from client PID 2252 ('systemctl') (unit session-7.scope)... May 14 00:01:12.977738 systemd[1]: Reloading... May 14 00:01:13.044920 zram_generator::config[2297]: No configuration found. May 14 00:01:13.130237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:01:13.203291 systemd[1]: Reloading finished in 225 ms. May 14 00:01:13.248704 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:13.250375 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:01:13.250576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:13.250622 systemd[1]: kubelet.service: Consumed 85ms CPU time, 82.4M memory peak. May 14 00:01:13.252075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:13.353887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:13.357861 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:01:13.397289 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:01:13.397741 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:01:13.397741 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:01:13.398666 kubelet[2343]: I0514 00:01:13.398398 2343 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:01:14.575687 kubelet[2343]: I0514 00:01:14.575274 2343 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:01:14.575687 kubelet[2343]: I0514 00:01:14.575307 2343 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:01:14.575687 kubelet[2343]: I0514 00:01:14.575517 2343 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:01:14.621024 kubelet[2343]: E0514 00:01:14.620980 2343 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:14.621179 kubelet[2343]: I0514 00:01:14.621134 2343 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:01:14.631667 kubelet[2343]: I0514 00:01:14.631561 2343 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:01:14.632784 kubelet[2343]: I0514 00:01:14.632734 2343 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:01:14.632957 kubelet[2343]: I0514 00:01:14.632785 2343 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:01:14.633042 kubelet[2343]: I0514 00:01:14.633011 2343 topology_manager.go:138] "Creating topology manager with none policy" May 14 
00:01:14.633042 kubelet[2343]: I0514 00:01:14.633020 2343 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:01:14.633294 kubelet[2343]: I0514 00:01:14.633264 2343 state_mem.go:36] "Initialized new in-memory state store" May 14 00:01:14.634253 kubelet[2343]: I0514 00:01:14.634231 2343 kubelet.go:400] "Attempting to sync node with API server" May 14 00:01:14.634279 kubelet[2343]: I0514 00:01:14.634253 2343 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:01:14.634566 kubelet[2343]: I0514 00:01:14.634554 2343 kubelet.go:312] "Adding apiserver pod source" May 14 00:01:14.634730 kubelet[2343]: I0514 00:01:14.634629 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:01:14.635680 kubelet[2343]: W0514 00:01:14.635613 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:14.635755 kubelet[2343]: E0514 00:01:14.635698 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:14.635784 kubelet[2343]: W0514 00:01:14.635754 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:14.635784 kubelet[2343]: E0514 00:01:14.635782 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: 
connection refused May 14 00:01:14.637764 kubelet[2343]: I0514 00:01:14.637704 2343 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:01:14.638207 kubelet[2343]: I0514 00:01:14.638174 2343 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:01:14.638256 kubelet[2343]: W0514 00:01:14.638243 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 00:01:14.639170 kubelet[2343]: I0514 00:01:14.639155 2343 server.go:1264] "Started kubelet" May 14 00:01:14.640662 kubelet[2343]: I0514 00:01:14.639548 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:01:14.640662 kubelet[2343]: I0514 00:01:14.639878 2343 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:01:14.640662 kubelet[2343]: I0514 00:01:14.639915 2343 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:01:14.640662 kubelet[2343]: I0514 00:01:14.640597 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:01:14.642688 kubelet[2343]: I0514 00:01:14.642665 2343 server.go:455] "Adding debug handlers to kubelet server" May 14 00:01:14.643958 kubelet[2343]: E0514 00:01:14.643786 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.141:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.141:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3bbf9f38d852 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 
00:01:14.63913685 +0000 UTC m=+1.278092681,LastTimestamp:2025-05-14 00:01:14.63913685 +0000 UTC m=+1.278092681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:01:14.644054 kubelet[2343]: I0514 00:01:14.643963 2343 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:01:14.644054 kubelet[2343]: I0514 00:01:14.644030 2343 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:01:14.644234 kubelet[2343]: W0514 00:01:14.644176 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:14.644234 kubelet[2343]: E0514 00:01:14.644222 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:14.644872 kubelet[2343]: I0514 00:01:14.644854 2343 reconciler.go:26] "Reconciler: start to sync state" May 14 00:01:14.645211 kubelet[2343]: E0514 00:01:14.645180 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="200ms" May 14 00:01:14.645633 kubelet[2343]: I0514 00:01:14.645572 2343 factory.go:221] Registration of the systemd container factory successfully May 14 00:01:14.645735 kubelet[2343]: I0514 00:01:14.645711 2343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory 
May 14 00:01:14.646492 kubelet[2343]: I0514 00:01:14.646423 2343 factory.go:221] Registration of the containerd container factory successfully May 14 00:01:14.653942 kubelet[2343]: I0514 00:01:14.653891 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:01:14.655192 kubelet[2343]: I0514 00:01:14.655169 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:01:14.655243 kubelet[2343]: I0514 00:01:14.655202 2343 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:01:14.655243 kubelet[2343]: I0514 00:01:14.655224 2343 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:01:14.655297 kubelet[2343]: E0514 00:01:14.655265 2343 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:01:14.659788 kubelet[2343]: W0514 00:01:14.659736 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:14.659788 kubelet[2343]: E0514 00:01:14.659792 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:14.660325 kubelet[2343]: I0514 00:01:14.660292 2343 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:01:14.660325 kubelet[2343]: I0514 00:01:14.660312 2343 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:01:14.660385 kubelet[2343]: I0514 00:01:14.660328 2343 state_mem.go:36] "Initialized new in-memory state store" May 14 00:01:14.725806 kubelet[2343]: I0514 00:01:14.725768 2343 policy_none.go:49] 
"None policy: Start" May 14 00:01:14.726657 kubelet[2343]: I0514 00:01:14.726631 2343 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:01:14.726734 kubelet[2343]: I0514 00:01:14.726676 2343 state_mem.go:35] "Initializing new in-memory state store" May 14 00:01:14.735316 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 00:01:14.745861 kubelet[2343]: I0514 00:01:14.745824 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:01:14.746165 kubelet[2343]: E0514 00:01:14.746138 2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" May 14 00:01:14.748599 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 00:01:14.751211 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 00:01:14.755927 kubelet[2343]: E0514 00:01:14.755896 2343 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:01:14.758402 kubelet[2343]: I0514 00:01:14.758380 2343 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:01:14.759108 kubelet[2343]: I0514 00:01:14.758682 2343 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:01:14.759108 kubelet[2343]: I0514 00:01:14.758806 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:01:14.760377 kubelet[2343]: E0514 00:01:14.760354 2343 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:01:14.846271 kubelet[2343]: E0514 00:01:14.846158 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="400ms" May 14 00:01:14.947842 kubelet[2343]: I0514 00:01:14.947795 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:01:14.948151 kubelet[2343]: E0514 00:01:14.948102 2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" May 14 00:01:14.956363 kubelet[2343]: I0514 00:01:14.956319 2343 topology_manager.go:215] "Topology Admit Handler" podUID="1ed316db6318bb74c54874922274baf9" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 00:01:14.957343 kubelet[2343]: I0514 00:01:14.957308 2343 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" 
podName="kube-controller-manager-localhost" May 14 00:01:14.958033 kubelet[2343]: I0514 00:01:14.958006 2343 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 00:01:14.964058 systemd[1]: Created slice kubepods-burstable-pod1ed316db6318bb74c54874922274baf9.slice - libcontainer container kubepods-burstable-pod1ed316db6318bb74c54874922274baf9.slice. May 14 00:01:14.976518 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 14 00:01:14.989435 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 14 00:01:15.047252 kubelet[2343]: I0514 00:01:15.047197 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:01:15.047252 kubelet[2343]: I0514 00:01:15.047238 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:01:15.047252 kubelet[2343]: I0514 00:01:15.047257 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 14 00:01:15.047422 kubelet[2343]: I0514 00:01:15.047273 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ed316db6318bb74c54874922274baf9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ed316db6318bb74c54874922274baf9\") " pod="kube-system/kube-apiserver-localhost" May 14 00:01:15.047422 kubelet[2343]: I0514 00:01:15.047296 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ed316db6318bb74c54874922274baf9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ed316db6318bb74c54874922274baf9\") " pod="kube-system/kube-apiserver-localhost" May 14 00:01:15.047422 kubelet[2343]: I0514 00:01:15.047362 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ed316db6318bb74c54874922274baf9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ed316db6318bb74c54874922274baf9\") " pod="kube-system/kube-apiserver-localhost" May 14 00:01:15.047422 kubelet[2343]: I0514 00:01:15.047414 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:01:15.047504 kubelet[2343]: I0514 00:01:15.047435 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:01:15.047504 kubelet[2343]: I0514 00:01:15.047451 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 00:01:15.247609 kubelet[2343]: E0514 00:01:15.247463 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="800ms" May 14 00:01:15.275499 containerd[1485]: time="2025-05-14T00:01:15.275442970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ed316db6318bb74c54874922274baf9,Namespace:kube-system,Attempt:0,}" May 14 00:01:15.288071 containerd[1485]: time="2025-05-14T00:01:15.288028690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 14 00:01:15.291702 containerd[1485]: time="2025-05-14T00:01:15.291668370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 14 00:01:15.350305 kubelet[2343]: I0514 00:01:15.350000 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:01:15.350305 kubelet[2343]: E0514 00:01:15.350271 2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" May 14 00:01:15.724123 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount435471376.mount: Deactivated successfully. May 14 00:01:15.730258 containerd[1485]: time="2025-05-14T00:01:15.730195410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:01:15.731826 containerd[1485]: time="2025-05-14T00:01:15.731769770Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:01:15.733531 containerd[1485]: time="2025-05-14T00:01:15.733480130Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 14 00:01:15.734283 containerd[1485]: time="2025-05-14T00:01:15.734233130Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 00:01:15.735697 containerd[1485]: time="2025-05-14T00:01:15.735651250Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:01:15.736855 containerd[1485]: time="2025-05-14T00:01:15.736788330Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 00:01:15.740752 containerd[1485]: time="2025-05-14T00:01:15.739760450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:01:15.740846 containerd[1485]: time="2025-05-14T00:01:15.740741810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 451.1832ms" May 14 00:01:15.740955 containerd[1485]: time="2025-05-14T00:01:15.740910690Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 447.72172ms" May 14 00:01:15.741569 containerd[1485]: time="2025-05-14T00:01:15.741362690Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 463.77168ms" May 14 00:01:15.741820 containerd[1485]: time="2025-05-14T00:01:15.741794290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:01:15.767987 containerd[1485]: time="2025-05-14T00:01:15.767687010Z" level=info msg="connecting to shim 6c1f1111b88537a1eef4d2e43c9049630ea289a4d70df8f0b862e108c45d177c" address="unix:///run/containerd/s/bc75844b59cdce1a24f0d165ae9c0c0958bcf61e3c6fd7fcf8763873d53d7101" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:15.768797 containerd[1485]: time="2025-05-14T00:01:15.768759970Z" level=info msg="connecting to shim 406fe7d678e447b9e5277c3adfaced1e7592da88d974789dacd475d411432590" address="unix:///run/containerd/s/d2e91d1cc8276155124234acd3242e0cee1f7172b214f1723277d2c0aa6d1888" namespace=k8s.io protocol=ttrpc version=3 May 14 
00:01:15.769313 containerd[1485]: time="2025-05-14T00:01:15.769286330Z" level=info msg="connecting to shim 74cbe08b2155cf7a04fa84a0096ff02a3790fee0977f90b82e8ee5d1704df552" address="unix:///run/containerd/s/9da028ebfd5701cf45d9db988615cf73a0b85a6bcd6c358fb297fdc7bffd7a08" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:15.793841 systemd[1]: Started cri-containerd-6c1f1111b88537a1eef4d2e43c9049630ea289a4d70df8f0b862e108c45d177c.scope - libcontainer container 6c1f1111b88537a1eef4d2e43c9049630ea289a4d70df8f0b862e108c45d177c. May 14 00:01:15.794991 systemd[1]: Started cri-containerd-74cbe08b2155cf7a04fa84a0096ff02a3790fee0977f90b82e8ee5d1704df552.scope - libcontainer container 74cbe08b2155cf7a04fa84a0096ff02a3790fee0977f90b82e8ee5d1704df552. May 14 00:01:15.797517 systemd[1]: Started cri-containerd-406fe7d678e447b9e5277c3adfaced1e7592da88d974789dacd475d411432590.scope - libcontainer container 406fe7d678e447b9e5277c3adfaced1e7592da88d974789dacd475d411432590. May 14 00:01:15.830614 containerd[1485]: time="2025-05-14T00:01:15.830278690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c1f1111b88537a1eef4d2e43c9049630ea289a4d70df8f0b862e108c45d177c\"" May 14 00:01:15.835376 containerd[1485]: time="2025-05-14T00:01:15.835336490Z" level=info msg="CreateContainer within sandbox \"6c1f1111b88537a1eef4d2e43c9049630ea289a4d70df8f0b862e108c45d177c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:01:15.836342 containerd[1485]: time="2025-05-14T00:01:15.836263210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"406fe7d678e447b9e5277c3adfaced1e7592da88d974789dacd475d411432590\"" May 14 00:01:15.839364 containerd[1485]: time="2025-05-14T00:01:15.839307810Z" level=info 
msg="CreateContainer within sandbox \"406fe7d678e447b9e5277c3adfaced1e7592da88d974789dacd475d411432590\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:01:15.840093 containerd[1485]: time="2025-05-14T00:01:15.840063770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ed316db6318bb74c54874922274baf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"74cbe08b2155cf7a04fa84a0096ff02a3790fee0977f90b82e8ee5d1704df552\"" May 14 00:01:15.842597 containerd[1485]: time="2025-05-14T00:01:15.842555210Z" level=info msg="Container a3c0c053fca8f8871365153cbfbf449935d7b5e57183965f66fc99aa061cc160: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:15.842932 containerd[1485]: time="2025-05-14T00:01:15.842792250Z" level=info msg="CreateContainer within sandbox \"74cbe08b2155cf7a04fa84a0096ff02a3790fee0977f90b82e8ee5d1704df552\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:01:15.846803 containerd[1485]: time="2025-05-14T00:01:15.846758930Z" level=info msg="Container d15ee961631c85e59c391c8a6a549bd08ea587dc983f273597e24f12c4a0114a: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:15.852337 containerd[1485]: time="2025-05-14T00:01:15.852285850Z" level=info msg="CreateContainer within sandbox \"6c1f1111b88537a1eef4d2e43c9049630ea289a4d70df8f0b862e108c45d177c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a3c0c053fca8f8871365153cbfbf449935d7b5e57183965f66fc99aa061cc160\"" May 14 00:01:15.853105 containerd[1485]: time="2025-05-14T00:01:15.853055210Z" level=info msg="StartContainer for \"a3c0c053fca8f8871365153cbfbf449935d7b5e57183965f66fc99aa061cc160\"" May 14 00:01:15.854242 containerd[1485]: time="2025-05-14T00:01:15.854211410Z" level=info msg="Container 0a6917c664f5511e721b017c70c39b562ec0d6613ee4953880c006c664888d71: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:15.854336 containerd[1485]: 
time="2025-05-14T00:01:15.854266010Z" level=info msg="connecting to shim a3c0c053fca8f8871365153cbfbf449935d7b5e57183965f66fc99aa061cc160" address="unix:///run/containerd/s/bc75844b59cdce1a24f0d165ae9c0c0958bcf61e3c6fd7fcf8763873d53d7101" protocol=ttrpc version=3 May 14 00:01:15.856968 containerd[1485]: time="2025-05-14T00:01:15.856932130Z" level=info msg="CreateContainer within sandbox \"406fe7d678e447b9e5277c3adfaced1e7592da88d974789dacd475d411432590\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d15ee961631c85e59c391c8a6a549bd08ea587dc983f273597e24f12c4a0114a\"" May 14 00:01:15.857450 containerd[1485]: time="2025-05-14T00:01:15.857423610Z" level=info msg="StartContainer for \"d15ee961631c85e59c391c8a6a549bd08ea587dc983f273597e24f12c4a0114a\"" May 14 00:01:15.858753 containerd[1485]: time="2025-05-14T00:01:15.858719850Z" level=info msg="connecting to shim d15ee961631c85e59c391c8a6a549bd08ea587dc983f273597e24f12c4a0114a" address="unix:///run/containerd/s/d2e91d1cc8276155124234acd3242e0cee1f7172b214f1723277d2c0aa6d1888" protocol=ttrpc version=3 May 14 00:01:15.866001 containerd[1485]: time="2025-05-14T00:01:15.865960450Z" level=info msg="CreateContainer within sandbox \"74cbe08b2155cf7a04fa84a0096ff02a3790fee0977f90b82e8ee5d1704df552\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a6917c664f5511e721b017c70c39b562ec0d6613ee4953880c006c664888d71\"" May 14 00:01:15.866566 containerd[1485]: time="2025-05-14T00:01:15.866453530Z" level=info msg="StartContainer for \"0a6917c664f5511e721b017c70c39b562ec0d6613ee4953880c006c664888d71\"" May 14 00:01:15.867736 containerd[1485]: time="2025-05-14T00:01:15.867700530Z" level=info msg="connecting to shim 0a6917c664f5511e721b017c70c39b562ec0d6613ee4953880c006c664888d71" address="unix:///run/containerd/s/9da028ebfd5701cf45d9db988615cf73a0b85a6bcd6c358fb297fdc7bffd7a08" protocol=ttrpc version=3 May 14 00:01:15.874955 systemd[1]: Started 
cri-containerd-a3c0c053fca8f8871365153cbfbf449935d7b5e57183965f66fc99aa061cc160.scope - libcontainer container a3c0c053fca8f8871365153cbfbf449935d7b5e57183965f66fc99aa061cc160. May 14 00:01:15.878021 systemd[1]: Started cri-containerd-d15ee961631c85e59c391c8a6a549bd08ea587dc983f273597e24f12c4a0114a.scope - libcontainer container d15ee961631c85e59c391c8a6a549bd08ea587dc983f273597e24f12c4a0114a. May 14 00:01:15.886749 systemd[1]: Started cri-containerd-0a6917c664f5511e721b017c70c39b562ec0d6613ee4953880c006c664888d71.scope - libcontainer container 0a6917c664f5511e721b017c70c39b562ec0d6613ee4953880c006c664888d71. May 14 00:01:15.894904 kubelet[2343]: W0514 00:01:15.894819 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:15.894904 kubelet[2343]: E0514 00:01:15.894881 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:15.922783 containerd[1485]: time="2025-05-14T00:01:15.922359850Z" level=info msg="StartContainer for \"a3c0c053fca8f8871365153cbfbf449935d7b5e57183965f66fc99aa061cc160\" returns successfully" May 14 00:01:15.936579 containerd[1485]: time="2025-05-14T00:01:15.934957250Z" level=info msg="StartContainer for \"d15ee961631c85e59c391c8a6a549bd08ea587dc983f273597e24f12c4a0114a\" returns successfully" May 14 00:01:15.937759 kubelet[2343]: W0514 00:01:15.937106 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:15.937759 
kubelet[2343]: E0514 00:01:15.937168 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 14 00:01:15.945686 containerd[1485]: time="2025-05-14T00:01:15.944105370Z" level=info msg="StartContainer for \"0a6917c664f5511e721b017c70c39b562ec0d6613ee4953880c006c664888d71\" returns successfully" May 14 00:01:16.048206 kubelet[2343]: E0514 00:01:16.048090 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="1.6s" May 14 00:01:16.155377 kubelet[2343]: I0514 00:01:16.154489 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:01:17.633093 kubelet[2343]: I0514 00:01:17.633006 2343 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 00:01:17.636697 kubelet[2343]: I0514 00:01:17.636640 2343 apiserver.go:52] "Watching apiserver" May 14 00:01:17.644880 kubelet[2343]: I0514 00:01:17.644832 2343 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:01:17.682154 kubelet[2343]: E0514 00:01:17.682088 2343 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 14 00:01:19.814453 systemd[1]: Reload requested from client PID 2618 ('systemctl') (unit session-7.scope)... May 14 00:01:19.814468 systemd[1]: Reloading... May 14 00:01:19.889724 zram_generator::config[2665]: No configuration found. 
May 14 00:01:19.969609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:01:20.055477 systemd[1]: Reloading finished in 240 ms. May 14 00:01:20.074014 kubelet[2343]: I0514 00:01:20.073895 2343 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:01:20.074128 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:20.085890 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:01:20.086101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:20.086143 systemd[1]: kubelet.service: Consumed 1.651s CPU time, 114.8M memory peak. May 14 00:01:20.088704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:20.209841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:20.213814 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:01:20.260860 kubelet[2705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:01:20.260860 kubelet[2705]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:01:20.260860 kubelet[2705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:01:20.261203 kubelet[2705]: I0514 00:01:20.260906 2705 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 00:01:20.265750 kubelet[2705]: I0514 00:01:20.264780 2705 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 14 00:01:20.265750 kubelet[2705]: I0514 00:01:20.264803 2705 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 00:01:20.265750 kubelet[2705]: I0514 00:01:20.264969 2705 server.go:927] "Client rotation is on, will bootstrap in background"
May 14 00:01:20.268440 kubelet[2705]: I0514 00:01:20.268413 2705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 14 00:01:20.269692 kubelet[2705]: I0514 00:01:20.269637 2705 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 00:01:20.274591 kubelet[2705]: I0514 00:01:20.274542 2705 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 00:01:20.274818 kubelet[2705]: I0514 00:01:20.274785 2705 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 00:01:20.274991 kubelet[2705]: I0514 00:01:20.274816 2705 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 14 00:01:20.275075 kubelet[2705]: I0514 00:01:20.274991 2705 topology_manager.go:138] "Creating topology manager with none policy"
May 14 00:01:20.275075 kubelet[2705]: I0514 00:01:20.275001 2705 container_manager_linux.go:301] "Creating device plugin manager"
May 14 00:01:20.275075 kubelet[2705]: I0514 00:01:20.275030 2705 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:01:20.275590 kubelet[2705]: I0514 00:01:20.275144 2705 kubelet.go:400] "Attempting to sync node with API server"
May 14 00:01:20.275590 kubelet[2705]: I0514 00:01:20.275172 2705 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 00:01:20.275590 kubelet[2705]: I0514 00:01:20.275217 2705 kubelet.go:312] "Adding apiserver pod source"
May 14 00:01:20.275590 kubelet[2705]: I0514 00:01:20.275235 2705 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 00:01:20.276224 kubelet[2705]: I0514 00:01:20.276206 2705 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 14 00:01:20.276554 kubelet[2705]: I0514 00:01:20.276528 2705 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 00:01:20.276956 kubelet[2705]: I0514 00:01:20.276928 2705 server.go:1264] "Started kubelet"
May 14 00:01:20.278118 kubelet[2705]: I0514 00:01:20.278091 2705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 00:01:20.278554 kubelet[2705]: I0514 00:01:20.278470 2705 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 00:01:20.280089 kubelet[2705]: I0514 00:01:20.279950 2705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 00:01:20.280170 kubelet[2705]: I0514 00:01:20.280151 2705 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 00:01:20.284986 kubelet[2705]: I0514 00:01:20.284278 2705 server.go:455] "Adding debug handlers to kubelet server"
May 14 00:01:20.287541 kubelet[2705]: I0514 00:01:20.287414 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 00:01:20.289763 kubelet[2705]: I0514 00:01:20.289735 2705 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 14 00:01:20.290454 kubelet[2705]: I0514 00:01:20.290433 2705 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 00:01:20.290598 kubelet[2705]: I0514 00:01:20.290586 2705 reconciler.go:26] "Reconciler: start to sync state"
May 14 00:01:20.292683 kubelet[2705]: I0514 00:01:20.291772 2705 factory.go:221] Registration of the systemd container factory successfully
May 14 00:01:20.292683 kubelet[2705]: I0514 00:01:20.291865 2705 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 00:01:20.292683 kubelet[2705]: I0514 00:01:20.292487 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 00:01:20.292683 kubelet[2705]: I0514 00:01:20.292515 2705 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 00:01:20.292683 kubelet[2705]: I0514 00:01:20.292528 2705 kubelet.go:2337] "Starting kubelet main sync loop"
May 14 00:01:20.292683 kubelet[2705]: E0514 00:01:20.292579 2705 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 00:01:20.295301 kubelet[2705]: E0514 00:01:20.295205 2705 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 00:01:20.295301 kubelet[2705]: I0514 00:01:20.295207 2705 factory.go:221] Registration of the containerd container factory successfully
May 14 00:01:20.329464 kubelet[2705]: I0514 00:01:20.329364 2705 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 00:01:20.329464 kubelet[2705]: I0514 00:01:20.329382 2705 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 00:01:20.329464 kubelet[2705]: I0514 00:01:20.329402 2705 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:01:20.329605 kubelet[2705]: I0514 00:01:20.329549 2705 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 14 00:01:20.329605 kubelet[2705]: I0514 00:01:20.329561 2705 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 14 00:01:20.329605 kubelet[2705]: I0514 00:01:20.329577 2705 policy_none.go:49] "None policy: Start"
May 14 00:01:20.330181 kubelet[2705]: I0514 00:01:20.330084 2705 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 00:01:20.330181 kubelet[2705]: I0514 00:01:20.330105 2705 state_mem.go:35] "Initializing new in-memory state store"
May 14 00:01:20.330331 kubelet[2705]: I0514 00:01:20.330230 2705 state_mem.go:75] "Updated machine memory state"
May 14 00:01:20.334785 kubelet[2705]: I0514 00:01:20.334671 2705 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 00:01:20.334854 kubelet[2705]: I0514 00:01:20.334824 2705 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 00:01:20.334945 kubelet[2705]: I0514 00:01:20.334919 2705 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 00:01:20.391339 kubelet[2705]: I0514 00:01:20.391297 2705 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 14 00:01:20.393457 kubelet[2705]: I0514 00:01:20.393417 2705 topology_manager.go:215] "Topology Admit Handler" podUID="1ed316db6318bb74c54874922274baf9" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 14 00:01:20.393558 kubelet[2705]: I0514 00:01:20.393525 2705 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 14 00:01:20.393590 kubelet[2705]: I0514 00:01:20.393563 2705 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 14 00:01:20.413520 kubelet[2705]: E0514 00:01:20.413471 2705 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 14 00:01:20.420198 kubelet[2705]: I0514 00:01:20.420170 2705 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
May 14 00:01:20.420327 kubelet[2705]: I0514 00:01:20.420274 2705 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 14 00:01:20.490954 kubelet[2705]: I0514 00:01:20.490920 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:01:20.490954 kubelet[2705]: I0514 00:01:20.490952 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:01:20.491129 kubelet[2705]: I0514 00:01:20.490973 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 14 00:01:20.491129 kubelet[2705]: I0514 00:01:20.490987 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ed316db6318bb74c54874922274baf9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ed316db6318bb74c54874922274baf9\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:01:20.491129 kubelet[2705]: I0514 00:01:20.491003 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ed316db6318bb74c54874922274baf9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ed316db6318bb74c54874922274baf9\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:01:20.491129 kubelet[2705]: I0514 00:01:20.491016 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ed316db6318bb74c54874922274baf9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ed316db6318bb74c54874922274baf9\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:01:20.491129 kubelet[2705]: I0514 00:01:20.491044 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:01:20.491233 kubelet[2705]: I0514 00:01:20.491058 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:01:20.491233 kubelet[2705]: I0514 00:01:20.491074 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:01:21.276962 kubelet[2705]: I0514 00:01:21.276919 2705 apiserver.go:52] "Watching apiserver"
May 14 00:01:21.291546 kubelet[2705]: I0514 00:01:21.291495 2705 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 00:01:21.384044 kubelet[2705]: E0514 00:01:21.383685 2705 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 14 00:01:21.399686 kubelet[2705]: I0514 00:01:21.399594 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.39957817 podStartE2EDuration="3.39957817s" podCreationTimestamp="2025-05-14 00:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:21.38453949 +0000 UTC m=+1.167848841" watchObservedRunningTime="2025-05-14 00:01:21.39957817 +0000 UTC m=+1.182887521"
May 14 00:01:21.400910 kubelet[2705]: I0514 00:01:21.400797 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.40078293 podStartE2EDuration="1.40078293s" podCreationTimestamp="2025-05-14 00:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:21.40011953 +0000 UTC m=+1.183428881" watchObservedRunningTime="2025-05-14 00:01:21.40078293 +0000 UTC m=+1.184092281"
May 14 00:01:21.414798 kubelet[2705]: I0514 00:01:21.414663 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.41463593 podStartE2EDuration="1.41463593s" podCreationTimestamp="2025-05-14 00:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:21.41410741 +0000 UTC m=+1.197416761" watchObservedRunningTime="2025-05-14 00:01:21.41463593 +0000 UTC m=+1.197945241"
May 14 00:01:25.045217 sudo[1683]: pam_unix(sudo:session): session closed for user root
May 14 00:01:25.050420 sshd[1682]: Connection closed by 10.0.0.1 port 45434
May 14 00:01:25.050915 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
May 14 00:01:25.054607 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:45434.service: Deactivated successfully.
May 14 00:01:25.056629 systemd[1]: session-7.scope: Deactivated successfully.
May 14 00:01:25.056857 systemd[1]: session-7.scope: Consumed 7.070s CPU time, 239.8M memory peak.
May 14 00:01:25.057976 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit.
May 14 00:01:25.058819 systemd-logind[1466]: Removed session 7.
May 14 00:01:34.903877 kubelet[2705]: I0514 00:01:34.903660 2705 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 14 00:01:34.912037 containerd[1485]: time="2025-05-14T00:01:34.911995139Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 14 00:01:34.912683 kubelet[2705]: I0514 00:01:34.912241 2705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 00:01:35.527432 update_engine[1468]: I20250514 00:01:35.527346 1468 update_attempter.cc:509] Updating boot flags...
May 14 00:01:35.559673 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2800)
May 14 00:01:35.590684 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2799)
May 14 00:01:35.892452 kubelet[2705]: I0514 00:01:35.892320 2705 topology_manager.go:215] "Topology Admit Handler" podUID="8f31ba62-eee6-4297-a7c6-1f93a7f5866b" podNamespace="kube-system" podName="kube-proxy-s698p"
May 14 00:01:35.902024 systemd[1]: Created slice kubepods-besteffort-pod8f31ba62_eee6_4297_a7c6_1f93a7f5866b.slice - libcontainer container kubepods-besteffort-pod8f31ba62_eee6_4297_a7c6_1f93a7f5866b.slice.
May 14 00:01:35.998137 kubelet[2705]: I0514 00:01:35.998076 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f31ba62-eee6-4297-a7c6-1f93a7f5866b-xtables-lock\") pod \"kube-proxy-s698p\" (UID: \"8f31ba62-eee6-4297-a7c6-1f93a7f5866b\") " pod="kube-system/kube-proxy-s698p"
May 14 00:01:35.998137 kubelet[2705]: I0514 00:01:35.998131 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f31ba62-eee6-4297-a7c6-1f93a7f5866b-kube-proxy\") pod \"kube-proxy-s698p\" (UID: \"8f31ba62-eee6-4297-a7c6-1f93a7f5866b\") " pod="kube-system/kube-proxy-s698p"
May 14 00:01:35.998137 kubelet[2705]: I0514 00:01:35.998151 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f31ba62-eee6-4297-a7c6-1f93a7f5866b-lib-modules\") pod \"kube-proxy-s698p\" (UID: \"8f31ba62-eee6-4297-a7c6-1f93a7f5866b\") " pod="kube-system/kube-proxy-s698p"
May 14 00:01:35.998562 kubelet[2705]: I0514 00:01:35.998168 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q2wd\" (UniqueName: \"kubernetes.io/projected/8f31ba62-eee6-4297-a7c6-1f93a7f5866b-kube-api-access-8q2wd\") pod \"kube-proxy-s698p\" (UID: \"8f31ba62-eee6-4297-a7c6-1f93a7f5866b\") " pod="kube-system/kube-proxy-s698p"
May 14 00:01:36.034229 kubelet[2705]: I0514 00:01:36.034143 2705 topology_manager.go:215] "Topology Admit Handler" podUID="17151207-9a72-4423-9951-b7d2f55ccee7" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-whmjz"
May 14 00:01:36.042139 systemd[1]: Created slice kubepods-besteffort-pod17151207_9a72_4423_9951_b7d2f55ccee7.slice - libcontainer container kubepods-besteffort-pod17151207_9a72_4423_9951_b7d2f55ccee7.slice.
May 14 00:01:36.199510 kubelet[2705]: I0514 00:01:36.199383 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/17151207-9a72-4423-9951-b7d2f55ccee7-var-lib-calico\") pod \"tigera-operator-797db67f8-whmjz\" (UID: \"17151207-9a72-4423-9951-b7d2f55ccee7\") " pod="tigera-operator/tigera-operator-797db67f8-whmjz"
May 14 00:01:36.199510 kubelet[2705]: I0514 00:01:36.199436 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hmv\" (UniqueName: \"kubernetes.io/projected/17151207-9a72-4423-9951-b7d2f55ccee7-kube-api-access-v7hmv\") pod \"tigera-operator-797db67f8-whmjz\" (UID: \"17151207-9a72-4423-9951-b7d2f55ccee7\") " pod="tigera-operator/tigera-operator-797db67f8-whmjz"
May 14 00:01:36.216267 containerd[1485]: time="2025-05-14T00:01:36.216223466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s698p,Uid:8f31ba62-eee6-4297-a7c6-1f93a7f5866b,Namespace:kube-system,Attempt:0,}"
May 14 00:01:36.269223 containerd[1485]: time="2025-05-14T00:01:36.269173554Z" level=info msg="connecting to shim 74ca5d4989d0db2112f7d5f2298c1e300bcef59fcbefd57f93577173303c0cf7" address="unix:///run/containerd/s/88bdfc27bc753f746b90209846db5053dd722e9ee0882b03b2e58b667e805797" namespace=k8s.io protocol=ttrpc version=3
May 14 00:01:36.294054 systemd[1]: Started cri-containerd-74ca5d4989d0db2112f7d5f2298c1e300bcef59fcbefd57f93577173303c0cf7.scope - libcontainer container 74ca5d4989d0db2112f7d5f2298c1e300bcef59fcbefd57f93577173303c0cf7.
May 14 00:01:36.329761 containerd[1485]: time="2025-05-14T00:01:36.329721883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s698p,Uid:8f31ba62-eee6-4297-a7c6-1f93a7f5866b,Namespace:kube-system,Attempt:0,} returns sandbox id \"74ca5d4989d0db2112f7d5f2298c1e300bcef59fcbefd57f93577173303c0cf7\""
May 14 00:01:36.336998 containerd[1485]: time="2025-05-14T00:01:36.336882322Z" level=info msg="CreateContainer within sandbox \"74ca5d4989d0db2112f7d5f2298c1e300bcef59fcbefd57f93577173303c0cf7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 00:01:36.345157 containerd[1485]: time="2025-05-14T00:01:36.345113927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-whmjz,Uid:17151207-9a72-4423-9951-b7d2f55ccee7,Namespace:tigera-operator,Attempt:0,}"
May 14 00:01:36.360511 containerd[1485]: time="2025-05-14T00:01:36.360426410Z" level=info msg="connecting to shim 22586bd333cb53b252b50118898bc618e5ec3d4b9a22f4361c7f907752401fec" address="unix:///run/containerd/s/8e9bbfa305951cee64fcecbd5fd53eb86cea2d73cc7c2bda3c881b4208d7f260" namespace=k8s.io protocol=ttrpc version=3
May 14 00:01:36.367656 containerd[1485]: time="2025-05-14T00:01:36.367588409Z" level=info msg="Container 8f9abe341eb65f5c2887144f476b865474398b14c51a07cf6b9f0fbb74bd1ec8: CDI devices from CRI Config.CDIDevices: []"
May 14 00:01:36.375842 containerd[1485]: time="2025-05-14T00:01:36.375786614Z" level=info msg="CreateContainer within sandbox \"74ca5d4989d0db2112f7d5f2298c1e300bcef59fcbefd57f93577173303c0cf7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f9abe341eb65f5c2887144f476b865474398b14c51a07cf6b9f0fbb74bd1ec8\""
May 14 00:01:36.379719 containerd[1485]: time="2025-05-14T00:01:36.379686875Z" level=info msg="StartContainer for \"8f9abe341eb65f5c2887144f476b865474398b14c51a07cf6b9f0fbb74bd1ec8\""
May 14 00:01:36.381104 containerd[1485]: time="2025-05-14T00:01:36.381070483Z" level=info msg="connecting to shim 8f9abe341eb65f5c2887144f476b865474398b14c51a07cf6b9f0fbb74bd1ec8" address="unix:///run/containerd/s/88bdfc27bc753f746b90209846db5053dd722e9ee0882b03b2e58b667e805797" protocol=ttrpc version=3
May 14 00:01:36.395707 systemd[1]: Started cri-containerd-22586bd333cb53b252b50118898bc618e5ec3d4b9a22f4361c7f907752401fec.scope - libcontainer container 22586bd333cb53b252b50118898bc618e5ec3d4b9a22f4361c7f907752401fec.
May 14 00:01:36.398728 systemd[1]: Started cri-containerd-8f9abe341eb65f5c2887144f476b865474398b14c51a07cf6b9f0fbb74bd1ec8.scope - libcontainer container 8f9abe341eb65f5c2887144f476b865474398b14c51a07cf6b9f0fbb74bd1ec8.
May 14 00:01:36.435478 containerd[1485]: time="2025-05-14T00:01:36.435430938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-whmjz,Uid:17151207-9a72-4423-9951-b7d2f55ccee7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"22586bd333cb53b252b50118898bc618e5ec3d4b9a22f4361c7f907752401fec\""
May 14 00:01:36.445766 containerd[1485]: time="2025-05-14T00:01:36.445730794Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 14 00:01:36.483179 containerd[1485]: time="2025-05-14T00:01:36.482959357Z" level=info msg="StartContainer for \"8f9abe341eb65f5c2887144f476b865474398b14c51a07cf6b9f0fbb74bd1ec8\" returns successfully"
May 14 00:01:37.363136 kubelet[2705]: I0514 00:01:37.363070 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s698p" podStartSLOduration=2.363053544 podStartE2EDuration="2.363053544s" podCreationTimestamp="2025-05-14 00:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:37.362536061 +0000 UTC m=+17.145845412" watchObservedRunningTime="2025-05-14 00:01:37.363053544 +0000 UTC m=+17.146362895"
May 14 00:01:37.778053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount995243259.mount: Deactivated successfully.
May 14 00:01:38.070499 containerd[1485]: time="2025-05-14T00:01:38.070239809Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:01:38.071268 containerd[1485]: time="2025-05-14T00:01:38.071190213Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
May 14 00:01:38.072345 containerd[1485]: time="2025-05-14T00:01:38.072215098Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:01:38.073908 containerd[1485]: time="2025-05-14T00:01:38.073870106Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:01:38.074657 containerd[1485]: time="2025-05-14T00:01:38.074607710Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.628838756s"
May 14 00:01:38.074657 containerd[1485]: time="2025-05-14T00:01:38.074643470Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
May 14 00:01:38.078308 containerd[1485]: time="2025-05-14T00:01:38.078219407Z" level=info msg="CreateContainer within sandbox \"22586bd333cb53b252b50118898bc618e5ec3d4b9a22f4361c7f907752401fec\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 14 00:01:38.086498 containerd[1485]: time="2025-05-14T00:01:38.085822123Z" level=info msg="Container 5ae3e1acb88e82365b058e05fcf98f77d510f9d374a7a49ede88259dde5fd98f: CDI devices from CRI Config.CDIDevices: []"
May 14 00:01:38.088371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670266066.mount: Deactivated successfully.
May 14 00:01:38.091254 containerd[1485]: time="2025-05-14T00:01:38.091127269Z" level=info msg="CreateContainer within sandbox \"22586bd333cb53b252b50118898bc618e5ec3d4b9a22f4361c7f907752401fec\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5ae3e1acb88e82365b058e05fcf98f77d510f9d374a7a49ede88259dde5fd98f\""
May 14 00:01:38.091895 containerd[1485]: time="2025-05-14T00:01:38.091780392Z" level=info msg="StartContainer for \"5ae3e1acb88e82365b058e05fcf98f77d510f9d374a7a49ede88259dde5fd98f\""
May 14 00:01:38.092639 containerd[1485]: time="2025-05-14T00:01:38.092612916Z" level=info msg="connecting to shim 5ae3e1acb88e82365b058e05fcf98f77d510f9d374a7a49ede88259dde5fd98f" address="unix:///run/containerd/s/8e9bbfa305951cee64fcecbd5fd53eb86cea2d73cc7c2bda3c881b4208d7f260" protocol=ttrpc version=3
May 14 00:01:38.110840 systemd[1]: Started cri-containerd-5ae3e1acb88e82365b058e05fcf98f77d510f9d374a7a49ede88259dde5fd98f.scope - libcontainer container 5ae3e1acb88e82365b058e05fcf98f77d510f9d374a7a49ede88259dde5fd98f.
May 14 00:01:38.187043 containerd[1485]: time="2025-05-14T00:01:38.186957887Z" level=info msg="StartContainer for \"5ae3e1acb88e82365b058e05fcf98f77d510f9d374a7a49ede88259dde5fd98f\" returns successfully"
May 14 00:01:41.597912 kubelet[2705]: I0514 00:01:41.596826 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-whmjz" podStartSLOduration=3.963049313 podStartE2EDuration="5.596807654s" podCreationTimestamp="2025-05-14 00:01:36 +0000 UTC" firstStartedPulling="2025-05-14 00:01:36.4431789 +0000 UTC m=+16.226488211" lastFinishedPulling="2025-05-14 00:01:38.076937201 +0000 UTC m=+17.860246552" observedRunningTime="2025-05-14 00:01:38.367522031 +0000 UTC m=+18.150831342" watchObservedRunningTime="2025-05-14 00:01:41.596807654 +0000 UTC m=+21.380116965"
May 14 00:01:41.597912 kubelet[2705]: I0514 00:01:41.597037 2705 topology_manager.go:215] "Topology Admit Handler" podUID="77d54731-0ef9-4d61-bed0-533e6ba78a30" podNamespace="calico-system" podName="calico-typha-64875fc9ff-krlvv"
May 14 00:01:41.610546 systemd[1]: Created slice kubepods-besteffort-pod77d54731_0ef9_4d61_bed0_533e6ba78a30.slice - libcontainer container kubepods-besteffort-pod77d54731_0ef9_4d61_bed0_533e6ba78a30.slice.
May 14 00:01:41.697612 kubelet[2705]: I0514 00:01:41.697536 2705 topology_manager.go:215] "Topology Admit Handler" podUID="5d0d33c6-7488-4d9d-bfc0-8c97cabca539" podNamespace="calico-system" podName="calico-node-ff6hj"
May 14 00:01:41.703356 systemd[1]: Created slice kubepods-besteffort-pod5d0d33c6_7488_4d9d_bfc0_8c97cabca539.slice - libcontainer container kubepods-besteffort-pod5d0d33c6_7488_4d9d_bfc0_8c97cabca539.slice.
May 14 00:01:41.737777 kubelet[2705]: I0514 00:01:41.737730 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/77d54731-0ef9-4d61-bed0-533e6ba78a30-typha-certs\") pod \"calico-typha-64875fc9ff-krlvv\" (UID: \"77d54731-0ef9-4d61-bed0-533e6ba78a30\") " pod="calico-system/calico-typha-64875fc9ff-krlvv"
May 14 00:01:41.737777 kubelet[2705]: I0514 00:01:41.737778 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc6m7\" (UniqueName: \"kubernetes.io/projected/77d54731-0ef9-4d61-bed0-533e6ba78a30-kube-api-access-dc6m7\") pod \"calico-typha-64875fc9ff-krlvv\" (UID: \"77d54731-0ef9-4d61-bed0-533e6ba78a30\") " pod="calico-system/calico-typha-64875fc9ff-krlvv"
May 14 00:01:41.737940 kubelet[2705]: I0514 00:01:41.737801 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77d54731-0ef9-4d61-bed0-533e6ba78a30-tigera-ca-bundle\") pod \"calico-typha-64875fc9ff-krlvv\" (UID: \"77d54731-0ef9-4d61-bed0-533e6ba78a30\") " pod="calico-system/calico-typha-64875fc9ff-krlvv"
May 14 00:01:41.778147 kubelet[2705]: I0514 00:01:41.778067 2705 topology_manager.go:215] "Topology Admit Handler" podUID="c8ed3b65-eee6-481b-bcd9-c2f7489b7d71" podNamespace="calico-system" podName="csi-node-driver-c9hj6"
May 14 00:01:41.778695 kubelet[2705]: E0514 00:01:41.778376 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9hj6" podUID="c8ed3b65-eee6-481b-bcd9-c2f7489b7d71"
May 14 00:01:41.838980 kubelet[2705]: I0514 00:01:41.838896 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-cni-bin-dir\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839128 kubelet[2705]: I0514 00:01:41.839016 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-cni-net-dir\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839128 kubelet[2705]: I0514 00:01:41.839040 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-lib-modules\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839128 kubelet[2705]: I0514 00:01:41.839061 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktglf\" (UniqueName: \"kubernetes.io/projected/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-kube-api-access-ktglf\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839128 kubelet[2705]: I0514 00:01:41.839097 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-tigera-ca-bundle\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839128 kubelet[2705]: I0514 00:01:41.839114 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-node-certs\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839246 kubelet[2705]: I0514 00:01:41.839131 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-xtables-lock\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839246 kubelet[2705]: I0514 00:01:41.839160 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-cni-log-dir\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839246 kubelet[2705]: I0514 00:01:41.839190 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-var-lib-calico\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839246 kubelet[2705]: I0514 00:01:41.839221 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-flexvol-driver-host\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839344 kubelet[2705]: I0514 00:01:41.839314 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-var-run-calico\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.839344 kubelet[2705]: I0514 00:01:41.839334 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5d0d33c6-7488-4d9d-bfc0-8c97cabca539-policysync\") pod \"calico-node-ff6hj\" (UID: \"5d0d33c6-7488-4d9d-bfc0-8c97cabca539\") " pod="calico-system/calico-node-ff6hj"
May 14 00:01:41.918000 containerd[1485]: time="2025-05-14T00:01:41.917883160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64875fc9ff-krlvv,Uid:77d54731-0ef9-4d61-bed0-533e6ba78a30,Namespace:calico-system,Attempt:0,}"
May 14 00:01:41.938765 containerd[1485]: time="2025-05-14T00:01:41.938700522Z" level=info msg="connecting to shim da81c5dd4b9f8bd5e951d9e8d8f79ad5b61c32c0a0df9523fcd2b6cca5d011cb" address="unix:///run/containerd/s/488c28af201b2baf1ca84f87b40752b75d0a237c809bc192c5ec205a01b5cc60" namespace=k8s.io protocol=ttrpc version=3
May 14 00:01:41.940293 kubelet[2705]: I0514 00:01:41.940249 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c8ed3b65-eee6-481b-bcd9-c2f7489b7d71-varrun\") pod \"csi-node-driver-c9hj6\" (UID: \"c8ed3b65-eee6-481b-bcd9-c2f7489b7d71\") " pod="calico-system/csi-node-driver-c9hj6"
May 14 00:01:41.940408 kubelet[2705]: I0514 00:01:41.940298 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbx8d\" (UniqueName: \"kubernetes.io/projected/c8ed3b65-eee6-481b-bcd9-c2f7489b7d71-kube-api-access-wbx8d\") pod \"csi-node-driver-c9hj6\" (UID: \"c8ed3b65-eee6-481b-bcd9-c2f7489b7d71\") " pod="calico-system/csi-node-driver-c9hj6"
May 14 00:01:41.940408 kubelet[2705]: I0514 00:01:41.940385 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c8ed3b65-eee6-481b-bcd9-c2f7489b7d71-socket-dir\") pod \"csi-node-driver-c9hj6\" (UID: \"c8ed3b65-eee6-481b-bcd9-c2f7489b7d71\") " pod="calico-system/csi-node-driver-c9hj6"
May 14 00:01:41.940486 kubelet[2705]: I0514 00:01:41.940422 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8ed3b65-eee6-481b-bcd9-c2f7489b7d71-kubelet-dir\") pod \"csi-node-driver-c9hj6\" (UID: \"c8ed3b65-eee6-481b-bcd9-c2f7489b7d71\") " pod="calico-system/csi-node-driver-c9hj6"
May 14 00:01:41.940486 kubelet[2705]: I0514 00:01:41.940438 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c8ed3b65-eee6-481b-bcd9-c2f7489b7d71-registration-dir\") pod \"csi-node-driver-c9hj6\" (UID: \"c8ed3b65-eee6-481b-bcd9-c2f7489b7d71\") " pod="calico-system/csi-node-driver-c9hj6"
May 14 00:01:41.946299 kubelet[2705]: E0514 00:01:41.945564 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:01:41.946299 kubelet[2705]: W0514 00:01:41.945586 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:01:41.946299 kubelet[2705]: E0514 00:01:41.945606 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 14 00:01:41.957568 kubelet[2705]: E0514 00:01:41.957540 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:41.957568 kubelet[2705]: W0514 00:01:41.957563 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:41.957722 kubelet[2705]: E0514 00:01:41.957586 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:41.993902 systemd[1]: Started cri-containerd-da81c5dd4b9f8bd5e951d9e8d8f79ad5b61c32c0a0df9523fcd2b6cca5d011cb.scope - libcontainer container da81c5dd4b9f8bd5e951d9e8d8f79ad5b61c32c0a0df9523fcd2b6cca5d011cb. May 14 00:01:42.007245 containerd[1485]: time="2025-05-14T00:01:42.007203350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ff6hj,Uid:5d0d33c6-7488-4d9d-bfc0-8c97cabca539,Namespace:calico-system,Attempt:0,}" May 14 00:01:42.032548 containerd[1485]: time="2025-05-14T00:01:42.032420603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64875fc9ff-krlvv,Uid:77d54731-0ef9-4d61-bed0-533e6ba78a30,Namespace:calico-system,Attempt:0,} returns sandbox id \"da81c5dd4b9f8bd5e951d9e8d8f79ad5b61c32c0a0df9523fcd2b6cca5d011cb\"" May 14 00:01:42.034397 containerd[1485]: time="2025-05-14T00:01:42.034166730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 00:01:42.041960 containerd[1485]: time="2025-05-14T00:01:42.041915678Z" level=info msg="connecting to shim 0b54d6a83e98ebb4d89bccb92c3e9c6847cb18d98dbddeef94db57858d1e1fe0" address="unix:///run/containerd/s/034365c20b6e78225dd7f7d36d9fa332543e24ce04d7ec7044d927998b523fc5" namespace=k8s.io protocol=ttrpc version=3 May 14 00:01:42.042090 
kubelet[2705]: E0514 00:01:42.041963 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:42.042090 kubelet[2705]: W0514 00:01:42.041984 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:42.042090 kubelet[2705]: E0514 00:01:42.042004 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same E/W/E FlexVolume probe-failure triad repeats verbatim, with only the timestamps changing, from 00:01:42.042250 through 00:01:42.056307; repeats omitted]
May 14 00:01:42.065837 systemd[1]: Started cri-containerd-0b54d6a83e98ebb4d89bccb92c3e9c6847cb18d98dbddeef94db57858d1e1fe0.scope - libcontainer container 0b54d6a83e98ebb4d89bccb92c3e9c6847cb18d98dbddeef94db57858d1e1fe0. May 14 00:01:42.122253 containerd[1485]: time="2025-05-14T00:01:42.122212775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ff6hj,Uid:5d0d33c6-7488-4d9d-bfc0-8c97cabca539,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b54d6a83e98ebb4d89bccb92c3e9c6847cb18d98dbddeef94db57858d1e1fe0\"" May 14 00:01:43.294566 kubelet[2705]: E0514 00:01:43.294184 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9hj6" podUID="c8ed3b65-eee6-481b-bcd9-c2f7489b7d71" May 14 00:01:43.695090 containerd[1485]: time="2025-05-14T00:01:43.694968465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:43.696069 containerd[1485]: time="2025-05-14T00:01:43.696021428Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 14 00:01:43.697673 containerd[1485]: time="2025-05-14T00:01:43.697133832Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:43.699284 containerd[1485]: time="2025-05-14T00:01:43.699227719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:43.699832 containerd[1485]: time="2025-05-14T00:01:43.699804481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.665603471s" May 14 00:01:43.700007 containerd[1485]: time="2025-05-14T00:01:43.699910002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 14 00:01:43.701214 containerd[1485]: time="2025-05-14T00:01:43.701182926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 00:01:43.716267 containerd[1485]: time="2025-05-14T00:01:43.716216658Z" level=info msg="CreateContainer within sandbox \"da81c5dd4b9f8bd5e951d9e8d8f79ad5b61c32c0a0df9523fcd2b6cca5d011cb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 00:01:43.722889 containerd[1485]: time="2025-05-14T00:01:43.722839401Z" level=info msg="Container 2aca7db60eba5628e733a98778c8d43a1dd10364528783d9c91eeb08d817dddf: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:43.729587 containerd[1485]: 
time="2025-05-14T00:01:43.729537744Z" level=info msg="CreateContainer within sandbox \"da81c5dd4b9f8bd5e951d9e8d8f79ad5b61c32c0a0df9523fcd2b6cca5d011cb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2aca7db60eba5628e733a98778c8d43a1dd10364528783d9c91eeb08d817dddf\"" May 14 00:01:43.731745 containerd[1485]: time="2025-05-14T00:01:43.731719392Z" level=info msg="StartContainer for \"2aca7db60eba5628e733a98778c8d43a1dd10364528783d9c91eeb08d817dddf\"" May 14 00:01:43.733022 containerd[1485]: time="2025-05-14T00:01:43.732796556Z" level=info msg="connecting to shim 2aca7db60eba5628e733a98778c8d43a1dd10364528783d9c91eeb08d817dddf" address="unix:///run/containerd/s/488c28af201b2baf1ca84f87b40752b75d0a237c809bc192c5ec205a01b5cc60" protocol=ttrpc version=3 May 14 00:01:43.753836 systemd[1]: Started cri-containerd-2aca7db60eba5628e733a98778c8d43a1dd10364528783d9c91eeb08d817dddf.scope - libcontainer container 2aca7db60eba5628e733a98778c8d43a1dd10364528783d9c91eeb08d817dddf. 
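
The containerd entries interleaved above all share a logfmt-style `time="…" level=… msg="…"` layout. As an illustrative aid (this tooling is not part of the log; the regex and function name are my own), a minimal Python sketch that extracts those three fields, including `msg` values that contain escaped quotes as in the `StartContainer` entry above:

```python
import re

# containerd entries in this log look like:
#   time="2025-05-14T00:01:43.731719392Z" level=info msg="StartContainer for \"2aca...\""
# The msg group tolerates backslash-escaped quotes inside the message body.
CONTAINERD_RE = re.compile(
    r'time="(?P<time>[^"]+)"\s+level=(?P<level>\w+)\s+msg="(?P<msg>(?:[^"\\]|\\.)*)"'
)

def parse_containerd(line: str) -> dict:
    """Extract the time/level/msg fields from one containerd log entry."""
    m = CONTAINERD_RE.search(line)
    return m.groupdict() if m else {}

# An entry taken verbatim from the log above:
sample = ('time="2025-05-14T00:01:43.731719392Z" level=info '
          'msg="StartContainer for \\"2aca7db60eba5628e733a98778c8d43a1dd10364528783d9c91eeb08d817dddf\\""')
entry = parse_containerd(sample)
print(entry["level"], entry["time"])  # info 2025-05-14T00:01:43.731719392Z
```

The same pattern applies to every `containerd[1485]` line in this section; journald's own `May 14 …` prefix and the unit name would need a separate leading pattern.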
May 14 00:01:43.796729 containerd[1485]: time="2025-05-14T00:01:43.796687217Z" level=info msg="StartContainer for \"2aca7db60eba5628e733a98778c8d43a1dd10364528783d9c91eeb08d817dddf\" returns successfully" May 14 00:01:44.378484 kubelet[2705]: I0514 00:01:44.378427 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64875fc9ff-krlvv" podStartSLOduration=1.711629075 podStartE2EDuration="3.37840871s" podCreationTimestamp="2025-05-14 00:01:41 +0000 UTC" firstStartedPulling="2025-05-14 00:01:42.033891209 +0000 UTC m=+21.817200560" lastFinishedPulling="2025-05-14 00:01:43.700670844 +0000 UTC m=+23.483980195" observedRunningTime="2025-05-14 00:01:44.37836175 +0000 UTC m=+24.161671101" watchObservedRunningTime="2025-05-14 00:01:44.37840871 +0000 UTC m=+24.161718061" May 14 00:01:44.459438 kubelet[2705]: E0514 00:01:44.459403 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:44.459438 kubelet[2705]: W0514 00:01:44.459433 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:44.459602 kubelet[2705]: E0514 00:01:44.459452 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:44.459711 kubelet[2705]: E0514 00:01:44.459698 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:44.459711 kubelet[2705]: W0514 00:01:44.459710 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:44.459773 kubelet[2705]: E0514 00:01:44.459720 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:44.459897 kubelet[2705]: E0514 00:01:44.459884 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:44.459897 kubelet[2705]: W0514 00:01:44.459896 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:44.459946 kubelet[2705]: E0514 00:01:44.459904 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:44.460077 kubelet[2705]: E0514 00:01:44.460065 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:44.460077 kubelet[2705]: W0514 00:01:44.460077 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:44.460126 kubelet[2705]: E0514 00:01:44.460085 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:01:44.460253 kubelet[2705]: E0514 00:01:44.460234 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:01:44.460253 kubelet[2705]: W0514 00:01:44.460244 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:01:44.460314 kubelet[2705]: E0514 00:01:44.460262 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:01:44.886909 containerd[1485]: time="2025-05-14T00:01:44.886853001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:44.887537 containerd[1485]: time="2025-05-14T00:01:44.887425803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 14 00:01:44.888275 containerd[1485]: time="2025-05-14T00:01:44.888240725Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:44.890230 containerd[1485]: time="2025-05-14T00:01:44.890176492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:44.890740 containerd[1485]: time="2025-05-14T00:01:44.890706653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.189489607s" May 14 00:01:44.890894 containerd[1485]: time="2025-05-14T00:01:44.890744373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 14 00:01:44.895790 containerd[1485]: time="2025-05-14T00:01:44.893617743Z" level=info msg="CreateContainer within sandbox \"0b54d6a83e98ebb4d89bccb92c3e9c6847cb18d98dbddeef94db57858d1e1fe0\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 00:01:44.900638 containerd[1485]: time="2025-05-14T00:01:44.900584685Z" level=info msg="Container 16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8: CDI devices from CRI Config.CDIDevices: []" May 14 00:01:44.914287 containerd[1485]: time="2025-05-14T00:01:44.914226010Z" level=info msg="CreateContainer within sandbox \"0b54d6a83e98ebb4d89bccb92c3e9c6847cb18d98dbddeef94db57858d1e1fe0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8\"" May 14 00:01:44.916120 containerd[1485]: time="2025-05-14T00:01:44.916038656Z" level=info msg="StartContainer for \"16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8\"" May 14 00:01:44.918141 containerd[1485]: time="2025-05-14T00:01:44.917945462Z" level=info msg="connecting to shim 16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8" address="unix:///run/containerd/s/034365c20b6e78225dd7f7d36d9fa332543e24ce04d7ec7044d927998b523fc5" protocol=ttrpc version=3 May 14 00:01:44.941852 systemd[1]: Started cri-containerd-16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8.scope - libcontainer container 16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8. May 14 00:01:44.976971 containerd[1485]: time="2025-05-14T00:01:44.976922333Z" level=info msg="StartContainer for \"16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8\" returns successfully" May 14 00:01:44.996814 systemd[1]: cri-containerd-16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8.scope: Deactivated successfully. 
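The repeated kubelet driver-call failures earlier in this log all stem from one cause: kubelet's plugin prober executes the FlexVolume driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before Calico's flexvol-driver container has installed it, the exec fails, stdout is empty, and unmarshalling that empty output as JSON yields "unexpected end of JSON input". A minimal Go sketch of that failure mode (the type here is an illustrative subset of the FlexVolume status object, not kubelet's actual source):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is an illustrative subset of the JSON status object a
// FlexVolume driver must print to stdout per the FlexVolume spec.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// parseDriverOutput mimics what kubelet does with the driver's stdout
// after invoking a command such as `<driver> init`.
func parseDriverOutput(output string) (*DriverStatus, error) {
	var st DriverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		return nil, err
	}
	return &st, nil
}

func main() {
	// A missing driver executable yields empty stdout, which is exactly
	// the "unexpected end of JSON input" error flooding the log above.
	if _, err := parseDriverOutput(""); err != nil {
		fmt.Println("empty output:", err)
	}

	// Once the driver is installed and answers `init`, parsing succeeds.
	if st, err := parseDriverOutput(`{"status": "Success"}`); err == nil {
		fmt.Println("init status:", st.Status)
	}
}
```

The errors are therefore transient: they stop once the flexvol-driver init container (whose pull and start appear above) drops the binary into the plugin directory.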
May 14 00:01:45.015887 containerd[1485]: time="2025-05-14T00:01:45.015834417Z" level=info msg="received exit event container_id:\"16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8\" id:\"16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8\" pid:3317 exited_at:{seconds:1747180905 nanos:12186845}" May 14 00:01:45.016075 containerd[1485]: time="2025-05-14T00:01:45.016040537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8\" id:\"16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8\" pid:3317 exited_at:{seconds:1747180905 nanos:12186845}" May 14 00:01:45.057454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16b5714f7f7b935eec9a078c07814aa218f2fd6ef77835c46f882c0a579913e8-rootfs.mount: Deactivated successfully. May 14 00:01:45.293439 kubelet[2705]: E0514 00:01:45.293206 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9hj6" podUID="c8ed3b65-eee6-481b-bcd9-c2f7489b7d71" May 14 00:01:45.371130 kubelet[2705]: I0514 00:01:45.370582 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:45.372024 containerd[1485]: time="2025-05-14T00:01:45.371968541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 00:01:47.293499 kubelet[2705]: E0514 00:01:47.293415 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9hj6" podUID="c8ed3b65-eee6-481b-bcd9-c2f7489b7d71" May 14 00:01:48.721934 containerd[1485]: time="2025-05-14T00:01:48.721883673Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:48.722840 containerd[1485]: time="2025-05-14T00:01:48.722660434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 14 00:01:48.723664 containerd[1485]: time="2025-05-14T00:01:48.723521237Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:48.725792 containerd[1485]: time="2025-05-14T00:01:48.725459121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:48.728462 containerd[1485]: time="2025-05-14T00:01:48.727096526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.355085545s" May 14 00:01:48.728462 containerd[1485]: time="2025-05-14T00:01:48.727140726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 14 00:01:48.731695 containerd[1485]: time="2025-05-14T00:01:48.731139416Z" level=info msg="CreateContainer within sandbox \"0b54d6a83e98ebb4d89bccb92c3e9c6847cb18d98dbddeef94db57858d1e1fe0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 00:01:48.739947 containerd[1485]: time="2025-05-14T00:01:48.739893038Z" level=info msg="Container 4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4: CDI devices from CRI 
Config.CDIDevices: []" May 14 00:01:48.747962 containerd[1485]: time="2025-05-14T00:01:48.747914418Z" level=info msg="CreateContainer within sandbox \"0b54d6a83e98ebb4d89bccb92c3e9c6847cb18d98dbddeef94db57858d1e1fe0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4\"" May 14 00:01:48.749713 containerd[1485]: time="2025-05-14T00:01:48.748889460Z" level=info msg="StartContainer for \"4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4\"" May 14 00:01:48.751538 containerd[1485]: time="2025-05-14T00:01:48.751432027Z" level=info msg="connecting to shim 4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4" address="unix:///run/containerd/s/034365c20b6e78225dd7f7d36d9fa332543e24ce04d7ec7044d927998b523fc5" protocol=ttrpc version=3 May 14 00:01:48.772827 systemd[1]: Started cri-containerd-4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4.scope - libcontainer container 4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4. May 14 00:01:48.809560 containerd[1485]: time="2025-05-14T00:01:48.808240569Z" level=info msg="StartContainer for \"4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4\" returns successfully" May 14 00:01:49.293592 kubelet[2705]: E0514 00:01:49.293035 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9hj6" podUID="c8ed3b65-eee6-481b-bcd9-c2f7489b7d71" May 14 00:01:49.449932 systemd[1]: cri-containerd-4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4.scope: Deactivated successfully. 
May 14 00:01:49.450599 systemd[1]: cri-containerd-4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4.scope: Consumed 471ms CPU time, 157.3M memory peak, 4K read from disk, 150.3M written to disk. May 14 00:01:49.466826 containerd[1485]: time="2025-05-14T00:01:49.466329347Z" level=info msg="received exit event container_id:\"4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4\" id:\"4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4\" pid:3378 exited_at:{seconds:1747180909 nanos:466092667}" May 14 00:01:49.466826 containerd[1485]: time="2025-05-14T00:01:49.466467548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4\" id:\"4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4\" pid:3378 exited_at:{seconds:1747180909 nanos:466092667}" May 14 00:01:49.480148 kubelet[2705]: I0514 00:01:49.478918 2705 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 00:01:49.489318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ff1c94edef850393a5bf2d74e41794626ccc8d5e490ed14d9e716ab7b32bad4-rootfs.mount: Deactivated successfully. 
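With the flexvol-driver and install-cni containers both finished, the driver path kubelet's prober was failing on should now be an installed executable. A hedged one-off check (the path is taken verbatim from the log; the helper name is mine):

```go
package main

import (
	"fmt"
	"os"
)

// checkDriver reports whether the FlexVolume driver binary kubelet
// probes is present and executable at the given path.
func checkDriver(path string) string {
	info, err := os.Stat(path)
	switch {
	case err != nil:
		return "driver missing; kubelet will keep logging driver-call failures"
	case info.Mode()&0o111 == 0:
		return "driver present but not executable"
	default:
		return "driver installed: " + path
	}
}

func main() {
	// Path from the driver-call errors earlier in this log.
	fmt.Println(checkDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}
```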
May 14 00:01:49.613760 kubelet[2705]: I0514 00:01:49.613072 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:49.613760 kubelet[2705]: I0514 00:01:49.613701 2705 topology_manager.go:215] "Topology Admit Handler" podUID="e6f31dde-a3a0-4b1f-90aa-0add754aeab0" podNamespace="calico-apiserver" podName="calico-apiserver-bd56b8668-7ppmc" May 14 00:01:49.614040 kubelet[2705]: I0514 00:01:49.613981 2705 topology_manager.go:215] "Topology Admit Handler" podUID="4542482e-1851-4643-8bf4-c7f756cc0345" podNamespace="calico-apiserver" podName="calico-apiserver-bd56b8668-528js" May 14 00:01:49.618146 kubelet[2705]: I0514 00:01:49.615380 2705 topology_manager.go:215] "Topology Admit Handler" podUID="1ba7051e-a376-42a7-9404-d780e62f7c49" podNamespace="kube-system" podName="coredns-7db6d8ff4d-srbbq" May 14 00:01:49.618146 kubelet[2705]: I0514 00:01:49.615552 2705 topology_manager.go:215] "Topology Admit Handler" podUID="e599a8f5-76e1-4fe5-8bdd-13714150c55c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6fl9s" May 14 00:01:49.618146 kubelet[2705]: I0514 00:01:49.615673 2705 topology_manager.go:215] "Topology Admit Handler" podUID="8f4760a2-09e7-43d4-a406-e2aab18fa5e1" podNamespace="calico-system" podName="calico-kube-controllers-7ddcf4fbf9-7nlv4" May 14 00:01:49.622643 systemd[1]: Created slice kubepods-besteffort-pode6f31dde_a3a0_4b1f_90aa_0add754aeab0.slice - libcontainer container kubepods-besteffort-pode6f31dde_a3a0_4b1f_90aa_0add754aeab0.slice. May 14 00:01:49.641785 systemd[1]: Created slice kubepods-besteffort-pod4542482e_1851_4643_8bf4_c7f756cc0345.slice - libcontainer container kubepods-besteffort-pod4542482e_1851_4643_8bf4_c7f756cc0345.slice. May 14 00:01:49.649922 systemd[1]: Created slice kubepods-besteffort-pod8f4760a2_09e7_43d4_a406_e2aab18fa5e1.slice - libcontainer container kubepods-besteffort-pod8f4760a2_09e7_43d4_a406_e2aab18fa5e1.slice. 
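The kubepods slice names systemd creates above follow kubelet's cgroup naming scheme: "kubepods", the pod's QoS class, and the pod UID with its dashes replaced by underscores, since systemd reserves "-" in slice names as the hierarchy separator. A small sketch of that mapping (the helper name is mine, not kubelet's):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reconstructs the systemd slice name kubelet derives for
// a pod: "kubepods-<qos>-pod<uid>.slice", with dashes in the UID
// escaped to underscores because "-" separates slice hierarchy levels.
func podSliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID taken from the calico-apiserver pod admission entry above.
	fmt.Println(podSliceName("besteffort", "e6f31dde-a3a0-4b1f-90aa-0add754aeab0"))
	// → kubepods-besteffort-pode6f31dde_a3a0_4b1f_90aa_0add754aeab0.slice
}
```

The output matches the "Created slice" entries in the log, including the burstable slices for the two coredns pods.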
May 14 00:01:49.656112 systemd[1]: Created slice kubepods-burstable-pod1ba7051e_a376_42a7_9404_d780e62f7c49.slice - libcontainer container kubepods-burstable-pod1ba7051e_a376_42a7_9404_d780e62f7c49.slice.
May 14 00:01:49.663961 systemd[1]: Created slice kubepods-burstable-pode599a8f5_76e1_4fe5_8bdd_13714150c55c.slice - libcontainer container kubepods-burstable-pode599a8f5_76e1_4fe5_8bdd_13714150c55c.slice.
May 14 00:01:49.730841 systemd[1]: Started sshd@7-10.0.0.141:22-10.0.0.1:33256.service - OpenSSH per-connection server daemon (10.0.0.1:33256).
May 14 00:01:49.789628 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 33256 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:01:49.791132 sshd-session[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:49.795608 systemd-logind[1466]: New session 8 of user core.
May 14 00:01:49.804818 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 00:01:49.813813 kubelet[2705]: I0514 00:01:49.813767 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e599a8f5-76e1-4fe5-8bdd-13714150c55c-config-volume\") pod \"coredns-7db6d8ff4d-6fl9s\" (UID: \"e599a8f5-76e1-4fe5-8bdd-13714150c55c\") " pod="kube-system/coredns-7db6d8ff4d-6fl9s"
May 14 00:01:49.814181 kubelet[2705]: I0514 00:01:49.813812 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6kr2\" (UniqueName: \"kubernetes.io/projected/e599a8f5-76e1-4fe5-8bdd-13714150c55c-kube-api-access-m6kr2\") pod \"coredns-7db6d8ff4d-6fl9s\" (UID: \"e599a8f5-76e1-4fe5-8bdd-13714150c55c\") " pod="kube-system/coredns-7db6d8ff4d-6fl9s"
May 14 00:01:49.814378 kubelet[2705]: I0514 00:01:49.814357 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f4760a2-09e7-43d4-a406-e2aab18fa5e1-tigera-ca-bundle\") pod \"calico-kube-controllers-7ddcf4fbf9-7nlv4\" (UID: \"8f4760a2-09e7-43d4-a406-e2aab18fa5e1\") " pod="calico-system/calico-kube-controllers-7ddcf4fbf9-7nlv4"
May 14 00:01:49.814430 kubelet[2705]: I0514 00:01:49.814393 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6f31dde-a3a0-4b1f-90aa-0add754aeab0-calico-apiserver-certs\") pod \"calico-apiserver-bd56b8668-7ppmc\" (UID: \"e6f31dde-a3a0-4b1f-90aa-0add754aeab0\") " pod="calico-apiserver/calico-apiserver-bd56b8668-7ppmc"
May 14 00:01:49.814476 kubelet[2705]: I0514 00:01:49.814436 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j8pq\" (UniqueName: \"kubernetes.io/projected/e6f31dde-a3a0-4b1f-90aa-0add754aeab0-kube-api-access-9j8pq\") pod \"calico-apiserver-bd56b8668-7ppmc\" (UID: \"e6f31dde-a3a0-4b1f-90aa-0add754aeab0\") " pod="calico-apiserver/calico-apiserver-bd56b8668-7ppmc"
May 14 00:01:49.814476 kubelet[2705]: I0514 00:01:49.814464 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5hvz\" (UniqueName: \"kubernetes.io/projected/4542482e-1851-4643-8bf4-c7f756cc0345-kube-api-access-n5hvz\") pod \"calico-apiserver-bd56b8668-528js\" (UID: \"4542482e-1851-4643-8bf4-c7f756cc0345\") " pod="calico-apiserver/calico-apiserver-bd56b8668-528js"
May 14 00:01:49.814519 kubelet[2705]: I0514 00:01:49.814488 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ba7051e-a376-42a7-9404-d780e62f7c49-config-volume\") pod \"coredns-7db6d8ff4d-srbbq\" (UID: \"1ba7051e-a376-42a7-9404-d780e62f7c49\") " pod="kube-system/coredns-7db6d8ff4d-srbbq"
May 14 00:01:49.814642 kubelet[2705]: I0514 00:01:49.814545 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58mm\" (UniqueName: \"kubernetes.io/projected/8f4760a2-09e7-43d4-a406-e2aab18fa5e1-kube-api-access-x58mm\") pod \"calico-kube-controllers-7ddcf4fbf9-7nlv4\" (UID: \"8f4760a2-09e7-43d4-a406-e2aab18fa5e1\") " pod="calico-system/calico-kube-controllers-7ddcf4fbf9-7nlv4"
May 14 00:01:49.814642 kubelet[2705]: I0514 00:01:49.814597 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6l2n\" (UniqueName: \"kubernetes.io/projected/1ba7051e-a376-42a7-9404-d780e62f7c49-kube-api-access-v6l2n\") pod \"coredns-7db6d8ff4d-srbbq\" (UID: \"1ba7051e-a376-42a7-9404-d780e62f7c49\") " pod="kube-system/coredns-7db6d8ff4d-srbbq"
May 14 00:01:49.814642 kubelet[2705]: I0514 00:01:49.814656 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4542482e-1851-4643-8bf4-c7f756cc0345-calico-apiserver-certs\") pod \"calico-apiserver-bd56b8668-528js\" (UID: \"4542482e-1851-4643-8bf4-c7f756cc0345\") " pod="calico-apiserver/calico-apiserver-bd56b8668-528js"
May 14 00:01:49.946867 containerd[1485]: time="2025-05-14T00:01:49.946752237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd56b8668-528js,Uid:4542482e-1851-4643-8bf4-c7f756cc0345,Namespace:calico-apiserver,Attempt:0,}"
May 14 00:01:49.951178 sshd[3415]: Connection closed by 10.0.0.1 port 33256
May 14 00:01:49.951701 sshd-session[3413]: pam_unix(sshd:session): session closed for user core
May 14 00:01:49.953832 containerd[1485]: time="2025-05-14T00:01:49.953543373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ddcf4fbf9-7nlv4,Uid:8f4760a2-09e7-43d4-a406-e2aab18fa5e1,Namespace:calico-system,Attempt:0,}"
May 14 00:01:49.954769 systemd[1]: sshd@7-10.0.0.141:22-10.0.0.1:33256.service: Deactivated successfully.
May 14 00:01:49.961489 containerd[1485]: time="2025-05-14T00:01:49.961437431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srbbq,Uid:1ba7051e-a376-42a7-9404-d780e62f7c49,Namespace:kube-system,Attempt:0,}"
May 14 00:01:49.961631 systemd[1]: session-8.scope: Deactivated successfully.
May 14 00:01:49.965233 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit.
May 14 00:01:49.967032 systemd-logind[1466]: Removed session 8.
May 14 00:01:49.994086 containerd[1485]: time="2025-05-14T00:01:49.990863421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6fl9s,Uid:e599a8f5-76e1-4fe5-8bdd-13714150c55c,Namespace:kube-system,Attempt:0,}"
May 14 00:01:50.228777 containerd[1485]: time="2025-05-14T00:01:50.228735747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd56b8668-7ppmc,Uid:e6f31dde-a3a0-4b1f-90aa-0add754aeab0,Namespace:calico-apiserver,Attempt:0,}"
May 14 00:01:50.353562 containerd[1485]: time="2025-05-14T00:01:50.353502902Z" level=error msg="Failed to destroy network for sandbox \"15b3d7c90b90a14b5c96a80e022cfbed6f8979a013825abd79654600d84b0b52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.355903 containerd[1485]: time="2025-05-14T00:01:50.355737427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd56b8668-528js,Uid:4542482e-1851-4643-8bf4-c7f756cc0345,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b3d7c90b90a14b5c96a80e022cfbed6f8979a013825abd79654600d84b0b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.359781 containerd[1485]: time="2025-05-14T00:01:50.358184312Z" level=error msg="Failed to destroy network for sandbox \"853b53b02510b31199356972b45f18f9d1bd3c080b00375ff26280a20bffd228\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.364725 kubelet[2705]: E0514 00:01:50.361143 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b3d7c90b90a14b5c96a80e022cfbed6f8979a013825abd79654600d84b0b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.364725 kubelet[2705]: E0514 00:01:50.361956 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b3d7c90b90a14b5c96a80e022cfbed6f8979a013825abd79654600d84b0b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd56b8668-528js"
May 14 00:01:50.364725 kubelet[2705]: E0514 00:01:50.361989 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b3d7c90b90a14b5c96a80e022cfbed6f8979a013825abd79654600d84b0b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd56b8668-528js"
May 14 00:01:50.365329 kubelet[2705]: E0514 00:01:50.362046 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd56b8668-528js_calico-apiserver(4542482e-1851-4643-8bf4-c7f756cc0345)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd56b8668-528js_calico-apiserver(4542482e-1851-4643-8bf4-c7f756cc0345)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15b3d7c90b90a14b5c96a80e022cfbed6f8979a013825abd79654600d84b0b52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd56b8668-528js" podUID="4542482e-1851-4643-8bf4-c7f756cc0345"
May 14 00:01:50.367882 containerd[1485]: time="2025-05-14T00:01:50.367840694Z" level=error msg="Failed to destroy network for sandbox \"21ec13e5ad57b6817636cab01b285b0e149810629d3a0bbf789fdf010cafaebc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.368432 containerd[1485]: time="2025-05-14T00:01:50.368394775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ddcf4fbf9-7nlv4,Uid:8f4760a2-09e7-43d4-a406-e2aab18fa5e1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"853b53b02510b31199356972b45f18f9d1bd3c080b00375ff26280a20bffd228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.369073 kubelet[2705]: E0514 00:01:50.368616 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853b53b02510b31199356972b45f18f9d1bd3c080b00375ff26280a20bffd228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.369073 kubelet[2705]: E0514 00:01:50.368677 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853b53b02510b31199356972b45f18f9d1bd3c080b00375ff26280a20bffd228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ddcf4fbf9-7nlv4"
May 14 00:01:50.369073 kubelet[2705]: E0514 00:01:50.368698 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853b53b02510b31199356972b45f18f9d1bd3c080b00375ff26280a20bffd228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ddcf4fbf9-7nlv4"
May 14 00:01:50.370322 kubelet[2705]: E0514 00:01:50.368775 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7ddcf4fbf9-7nlv4_calico-system(8f4760a2-09e7-43d4-a406-e2aab18fa5e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7ddcf4fbf9-7nlv4_calico-system(8f4760a2-09e7-43d4-a406-e2aab18fa5e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"853b53b02510b31199356972b45f18f9d1bd3c080b00375ff26280a20bffd228\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ddcf4fbf9-7nlv4" podUID="8f4760a2-09e7-43d4-a406-e2aab18fa5e1"
May 14 00:01:50.370519 containerd[1485]: time="2025-05-14T00:01:50.370203699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6fl9s,Uid:e599a8f5-76e1-4fe5-8bdd-13714150c55c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ec13e5ad57b6817636cab01b285b0e149810629d3a0bbf789fdf010cafaebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.371282 kubelet[2705]: E0514 00:01:50.370778 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ec13e5ad57b6817636cab01b285b0e149810629d3a0bbf789fdf010cafaebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.371361 kubelet[2705]: E0514 00:01:50.371316 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ec13e5ad57b6817636cab01b285b0e149810629d3a0bbf789fdf010cafaebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6fl9s"
May 14 00:01:50.371361 kubelet[2705]: E0514 00:01:50.371346 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ec13e5ad57b6817636cab01b285b0e149810629d3a0bbf789fdf010cafaebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6fl9s"
May 14 00:01:50.371468 kubelet[2705]: E0514 00:01:50.371390 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6fl9s_kube-system(e599a8f5-76e1-4fe5-8bdd-13714150c55c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6fl9s_kube-system(e599a8f5-76e1-4fe5-8bdd-13714150c55c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21ec13e5ad57b6817636cab01b285b0e149810629d3a0bbf789fdf010cafaebc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6fl9s" podUID="e599a8f5-76e1-4fe5-8bdd-13714150c55c"
May 14 00:01:50.372142 containerd[1485]: time="2025-05-14T00:01:50.372103183Z" level=error msg="Failed to destroy network for sandbox \"09dd06a272130c9325ed22845fffd3045e98d8edc74e297554c6807a00a7022f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.373428 containerd[1485]: time="2025-05-14T00:01:50.373262826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srbbq,Uid:1ba7051e-a376-42a7-9404-d780e62f7c49,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09dd06a272130c9325ed22845fffd3045e98d8edc74e297554c6807a00a7022f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.373723 kubelet[2705]: E0514 00:01:50.373690 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09dd06a272130c9325ed22845fffd3045e98d8edc74e297554c6807a00a7022f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.373806 kubelet[2705]: E0514 00:01:50.373735 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09dd06a272130c9325ed22845fffd3045e98d8edc74e297554c6807a00a7022f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-srbbq"
May 14 00:01:50.373806 kubelet[2705]: E0514 00:01:50.373752 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09dd06a272130c9325ed22845fffd3045e98d8edc74e297554c6807a00a7022f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-srbbq"
May 14 00:01:50.373806 kubelet[2705]: E0514 00:01:50.373793 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-srbbq_kube-system(1ba7051e-a376-42a7-9404-d780e62f7c49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-srbbq_kube-system(1ba7051e-a376-42a7-9404-d780e62f7c49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09dd06a272130c9325ed22845fffd3045e98d8edc74e297554c6807a00a7022f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-srbbq" podUID="1ba7051e-a376-42a7-9404-d780e62f7c49"
May 14 00:01:50.377798 containerd[1485]: time="2025-05-14T00:01:50.377761155Z" level=error msg="Failed to destroy network for sandbox \"ac2b84727234c1b84c3606f933c01f6644ce45c4c884708cf044495a6a85bcae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.382304 containerd[1485]: time="2025-05-14T00:01:50.382158885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd56b8668-7ppmc,Uid:e6f31dde-a3a0-4b1f-90aa-0add754aeab0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2b84727234c1b84c3606f933c01f6644ce45c4c884708cf044495a6a85bcae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.382890 kubelet[2705]: E0514 00:01:50.382343 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2b84727234c1b84c3606f933c01f6644ce45c4c884708cf044495a6a85bcae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:50.382890 kubelet[2705]: E0514 00:01:50.382386 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2b84727234c1b84c3606f933c01f6644ce45c4c884708cf044495a6a85bcae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd56b8668-7ppmc"
May 14 00:01:50.382890 kubelet[2705]: E0514 00:01:50.382404 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2b84727234c1b84c3606f933c01f6644ce45c4c884708cf044495a6a85bcae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd56b8668-7ppmc"
May 14 00:01:50.383483 kubelet[2705]: E0514 00:01:50.382440 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd56b8668-7ppmc_calico-apiserver(e6f31dde-a3a0-4b1f-90aa-0add754aeab0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd56b8668-7ppmc_calico-apiserver(e6f31dde-a3a0-4b1f-90aa-0add754aeab0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac2b84727234c1b84c3606f933c01f6644ce45c4c884708cf044495a6a85bcae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd56b8668-7ppmc" podUID="e6f31dde-a3a0-4b1f-90aa-0add754aeab0"
May 14 00:01:50.393388 containerd[1485]: time="2025-05-14T00:01:50.392976189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
May 14 00:01:50.921451 systemd[1]: run-netns-cni\x2dc2505e61\x2deefe\x2def87\x2d197b\x2d03d4bf37b912.mount: Deactivated successfully.
May 14 00:01:50.921574 systemd[1]: run-netns-cni\x2d97e1d7c5\x2df753\x2dcb6d\x2d0c72\x2d310d890b18c2.mount: Deactivated successfully.
May 14 00:01:51.306456 systemd[1]: Created slice kubepods-besteffort-podc8ed3b65_eee6_481b_bcd9_c2f7489b7d71.slice - libcontainer container kubepods-besteffort-podc8ed3b65_eee6_481b_bcd9_c2f7489b7d71.slice.
May 14 00:01:51.314599 containerd[1485]: time="2025-05-14T00:01:51.314172176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9hj6,Uid:c8ed3b65-eee6-481b-bcd9-c2f7489b7d71,Namespace:calico-system,Attempt:0,}"
May 14 00:01:51.378269 containerd[1485]: time="2025-05-14T00:01:51.378074869Z" level=error msg="Failed to destroy network for sandbox \"5560601eef96077710a41a4d4f746b352f9cd1965abccff4421a0ed5cc0b6f30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:51.379623 containerd[1485]: time="2025-05-14T00:01:51.379565672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9hj6,Uid:c8ed3b65-eee6-481b-bcd9-c2f7489b7d71,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5560601eef96077710a41a4d4f746b352f9cd1965abccff4421a0ed5cc0b6f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:51.380755 kubelet[2705]: E0514 00:01:51.379807 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5560601eef96077710a41a4d4f746b352f9cd1965abccff4421a0ed5cc0b6f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:51.380755 kubelet[2705]: E0514 00:01:51.379867 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5560601eef96077710a41a4d4f746b352f9cd1965abccff4421a0ed5cc0b6f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c9hj6"
May 14 00:01:51.380755 kubelet[2705]: E0514 00:01:51.379887 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5560601eef96077710a41a4d4f746b352f9cd1965abccff4421a0ed5cc0b6f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c9hj6"
May 14 00:01:51.380450 systemd[1]: run-netns-cni\x2da8160695\x2dfb59\x2dfadd\x2d39aa\x2d498e22bec4a9.mount: Deactivated successfully.
May 14 00:01:51.381178 kubelet[2705]: E0514 00:01:51.379926 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c9hj6_calico-system(c8ed3b65-eee6-481b-bcd9-c2f7489b7d71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c9hj6_calico-system(c8ed3b65-eee6-481b-bcd9-c2f7489b7d71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5560601eef96077710a41a4d4f746b352f9cd1965abccff4421a0ed5cc0b6f30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c9hj6" podUID="c8ed3b65-eee6-481b-bcd9-c2f7489b7d71"
May 14 00:01:53.687783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872962555.mount: Deactivated successfully.
May 14 00:01:53.972148 containerd[1485]: time="2025-05-14T00:01:53.972097858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:01:53.972705 containerd[1485]: time="2025-05-14T00:01:53.972626619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893"
May 14 00:01:53.973388 containerd[1485]: time="2025-05-14T00:01:53.973358340Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:01:53.975141 containerd[1485]: time="2025-05-14T00:01:53.975115943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:01:53.975627 containerd[1485]: time="2025-05-14T00:01:53.975601904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.582543315s"
May 14 00:01:53.975752 containerd[1485]: time="2025-05-14T00:01:53.975635504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\""
May 14 00:01:53.988268 containerd[1485]: time="2025-05-14T00:01:53.987959527Z" level=info msg="CreateContainer within sandbox \"0b54d6a83e98ebb4d89bccb92c3e9c6847cb18d98dbddeef94db57858d1e1fe0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 14 00:01:54.001217 containerd[1485]: time="2025-05-14T00:01:54.001170750Z" level=info msg="Container e59c0fad6ea493402ed3f5fa176383d0288ce193f28c5453b74bb92f911b54af: CDI devices from CRI Config.CDIDevices: []"
May 14 00:01:54.019150 containerd[1485]: time="2025-05-14T00:01:54.019113861Z" level=info msg="CreateContainer within sandbox \"0b54d6a83e98ebb4d89bccb92c3e9c6847cb18d98dbddeef94db57858d1e1fe0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e59c0fad6ea493402ed3f5fa176383d0288ce193f28c5453b74bb92f911b54af\""
May 14 00:01:54.019761 containerd[1485]: time="2025-05-14T00:01:54.019637102Z" level=info msg="StartContainer for \"e59c0fad6ea493402ed3f5fa176383d0288ce193f28c5453b74bb92f911b54af\""
May 14 00:01:54.020989 containerd[1485]: time="2025-05-14T00:01:54.020958864Z" level=info msg="connecting to shim e59c0fad6ea493402ed3f5fa176383d0288ce193f28c5453b74bb92f911b54af" address="unix:///run/containerd/s/034365c20b6e78225dd7f7d36d9fa332543e24ce04d7ec7044d927998b523fc5" protocol=ttrpc version=3
May 14 00:01:54.044571 systemd[1]: Started cri-containerd-e59c0fad6ea493402ed3f5fa176383d0288ce193f28c5453b74bb92f911b54af.scope - libcontainer container e59c0fad6ea493402ed3f5fa176383d0288ce193f28c5453b74bb92f911b54af.
May 14 00:01:54.133148 containerd[1485]: time="2025-05-14T00:01:54.133106575Z" level=info msg="StartContainer for \"e59c0fad6ea493402ed3f5fa176383d0288ce193f28c5453b74bb92f911b54af\" returns successfully"
May 14 00:01:54.233227 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 14 00:01:54.233375 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
May 14 00:01:54.541517 containerd[1485]: time="2025-05-14T00:01:54.541479271Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e59c0fad6ea493402ed3f5fa176383d0288ce193f28c5453b74bb92f911b54af\" id:\"e7664ca469bab66b33bf56c385ad1f0af3138bacb3e833c500d849e136841552\" pid:3746 exit_status:1 exited_at:{seconds:1747180914 nanos:541118950}"
May 14 00:01:54.964431 systemd[1]: Started sshd@8-10.0.0.141:22-10.0.0.1:50908.service - OpenSSH per-connection server daemon (10.0.0.1:50908).
May 14 00:01:55.031660 sshd[3760]: Accepted publickey for core from 10.0.0.1 port 50908 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:01:55.033584 sshd-session[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:55.039322 systemd-logind[1466]: New session 9 of user core.
May 14 00:01:55.049874 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 00:01:55.162681 sshd[3762]: Connection closed by 10.0.0.1 port 50908
May 14 00:01:55.162937 sshd-session[3760]: pam_unix(sshd:session): session closed for user core
May 14 00:01:55.166507 systemd[1]: sshd@8-10.0.0.141:22-10.0.0.1:50908.service: Deactivated successfully.
May 14 00:01:55.169427 systemd[1]: session-9.scope: Deactivated successfully.
May 14 00:01:55.170163 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit.
May 14 00:01:55.171111 systemd-logind[1466]: Removed session 9.
May 14 00:01:55.464490 containerd[1485]: time="2025-05-14T00:01:55.464440353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e59c0fad6ea493402ed3f5fa176383d0288ce193f28c5453b74bb92f911b54af\" id:\"ca61ec6e238782f0f2b4d0f48a2f1c3efd15e5cba7fb42811ffa9076cab8a71c\" pid:3787 exit_status:1 exited_at:{seconds:1747180915 nanos:464142152}"
May 14 00:01:55.649763 kernel: bpftool[3913]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
May 14 00:01:55.827978 systemd-networkd[1398]: vxlan.calico: Link UP
May 14 00:01:55.828086 systemd-networkd[1398]: vxlan.calico: Gained carrier
May 14 00:01:57.854802 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL
May 14 00:02:00.187125 systemd[1]: Started sshd@9-10.0.0.141:22-10.0.0.1:50920.service - OpenSSH per-connection server daemon (10.0.0.1:50920).
May 14 00:02:00.246952 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 50920 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:00.249020 sshd-session[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:00.256711 systemd-logind[1466]: New session 10 of user core.
May 14 00:02:00.267840 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 00:02:00.440050 sshd[4010]: Connection closed by 10.0.0.1 port 50920
May 14 00:02:00.440768 sshd-session[4008]: pam_unix(sshd:session): session closed for user core
May 14 00:02:00.454117 systemd[1]: sshd@9-10.0.0.141:22-10.0.0.1:50920.service: Deactivated successfully.
May 14 00:02:00.457129 systemd[1]: session-10.scope: Deactivated successfully.
May 14 00:02:00.458329 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit.
May 14 00:02:00.460084 systemd[1]: Started sshd@10-10.0.0.141:22-10.0.0.1:50930.service - OpenSSH per-connection server daemon (10.0.0.1:50930).
May 14 00:02:00.461046 systemd-logind[1466]: Removed session 10.
May 14 00:02:00.510007 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 50930 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:00.511032 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:00.515277 systemd-logind[1466]: New session 11 of user core.
May 14 00:02:00.522860 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 00:02:00.674596 sshd[4027]: Connection closed by 10.0.0.1 port 50930
May 14 00:02:00.674933 sshd-session[4024]: pam_unix(sshd:session): session closed for user core
May 14 00:02:00.684558 systemd[1]: sshd@10-10.0.0.141:22-10.0.0.1:50930.service: Deactivated successfully.
May 14 00:02:00.688185 systemd[1]: session-11.scope: Deactivated successfully.
May 14 00:02:00.689554 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit.
May 14 00:02:00.693936 systemd[1]: Started sshd@11-10.0.0.141:22-10.0.0.1:50932.service - OpenSSH per-connection server daemon (10.0.0.1:50932).
May 14 00:02:00.697509 systemd-logind[1466]: Removed session 11.
May 14 00:02:00.747565 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 50932 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:00.748974 sshd-session[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:00.753842 systemd-logind[1466]: New session 12 of user core.
May 14 00:02:00.761821 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 00:02:00.877690 sshd[4041]: Connection closed by 10.0.0.1 port 50932
May 14 00:02:00.877851 sshd-session[4038]: pam_unix(sshd:session): session closed for user core
May 14 00:02:00.881257 systemd[1]: sshd@11-10.0.0.141:22-10.0.0.1:50932.service: Deactivated successfully.
May 14 00:02:00.883091 systemd[1]: session-12.scope: Deactivated successfully.
May 14 00:02:00.884017 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit.
May 14 00:02:00.884899 systemd-logind[1466]: Removed session 12. May 14 00:02:02.294707 containerd[1485]: time="2025-05-14T00:02:02.294662237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6fl9s,Uid:e599a8f5-76e1-4fe5-8bdd-13714150c55c,Namespace:kube-system,Attempt:0,}" May 14 00:02:02.513009 systemd-networkd[1398]: calicb1b4146f13: Link UP May 14 00:02:02.513293 systemd-networkd[1398]: calicb1b4146f13: Gained carrier May 14 00:02:02.523156 kubelet[2705]: I0514 00:02:02.522475 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ff6hj" podStartSLOduration=9.669698343 podStartE2EDuration="21.522459228s" podCreationTimestamp="2025-05-14 00:01:41 +0000 UTC" firstStartedPulling="2025-05-14 00:01:42.12356498 +0000 UTC m=+21.906874331" lastFinishedPulling="2025-05-14 00:01:53.976325865 +0000 UTC m=+33.759635216" observedRunningTime="2025-05-14 00:01:54.424766552 +0000 UTC m=+34.208075983" watchObservedRunningTime="2025-05-14 00:02:02.522459228 +0000 UTC m=+42.305768579" May 14 00:02:02.527225 containerd[1485]: 2025-05-14 00:02:02.360 [INFO][4066] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0 coredns-7db6d8ff4d- kube-system e599a8f5-76e1-4fe5-8bdd-13714150c55c 695 0 2025-05-14 00:01:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-6fl9s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb1b4146f13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fl9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6fl9s-" May 14 00:02:02.527225 containerd[1485]: 
2025-05-14 00:02:02.360 [INFO][4066] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fl9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0" May 14 00:02:02.527225 containerd[1485]: 2025-05-14 00:02:02.466 [INFO][4080] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" HandleID="k8s-pod-network.12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Workload="localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0" May 14 00:02:02.529488 containerd[1485]: 2025-05-14 00:02:02.480 [INFO][4080] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" HandleID="k8s-pod-network.12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Workload="localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003837e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-6fl9s", "timestamp":"2025-05-14 00:02:02.466450691 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:02:02.529488 containerd[1485]: 2025-05-14 00:02:02.480 [INFO][4080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:02:02.529488 containerd[1485]: 2025-05-14 00:02:02.480 [INFO][4080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:02:02.529488 containerd[1485]: 2025-05-14 00:02:02.480 [INFO][4080] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:02:02.529488 containerd[1485]: 2025-05-14 00:02:02.482 [INFO][4080] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" host="localhost" May 14 00:02:02.529488 containerd[1485]: 2025-05-14 00:02:02.488 [INFO][4080] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:02:02.529488 containerd[1485]: 2025-05-14 00:02:02.492 [INFO][4080] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:02:02.529488 containerd[1485]: 2025-05-14 00:02:02.494 [INFO][4080] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:02:02.529488 containerd[1485]: 2025-05-14 00:02:02.496 [INFO][4080] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:02:02.529488 containerd[1485]: 2025-05-14 00:02:02.496 [INFO][4080] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" host="localhost" May 14 00:02:02.529861 containerd[1485]: 2025-05-14 00:02:02.497 [INFO][4080] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1 May 14 00:02:02.529861 containerd[1485]: 2025-05-14 00:02:02.501 [INFO][4080] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" host="localhost" May 14 00:02:02.529861 containerd[1485]: 2025-05-14 00:02:02.505 [INFO][4080] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" host="localhost" May 14 00:02:02.529861 containerd[1485]: 2025-05-14 00:02:02.505 [INFO][4080] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" host="localhost" May 14 00:02:02.529861 containerd[1485]: 2025-05-14 00:02:02.506 [INFO][4080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:02:02.529861 containerd[1485]: 2025-05-14 00:02:02.506 [INFO][4080] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" HandleID="k8s-pod-network.12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Workload="localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0" May 14 00:02:02.529981 containerd[1485]: 2025-05-14 00:02:02.508 [INFO][4066] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fl9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e599a8f5-76e1-4fe5-8bdd-13714150c55c", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-6fl9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb1b4146f13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:02.530041 containerd[1485]: 2025-05-14 00:02:02.508 [INFO][4066] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fl9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0" May 14 00:02:02.530041 containerd[1485]: 2025-05-14 00:02:02.508 [INFO][4066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb1b4146f13 ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fl9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0" May 14 00:02:02.530041 containerd[1485]: 2025-05-14 00:02:02.513 [INFO][4066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fl9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0" May 14 
00:02:02.530114 containerd[1485]: 2025-05-14 00:02:02.513 [INFO][4066] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fl9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e599a8f5-76e1-4fe5-8bdd-13714150c55c", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1", Pod:"coredns-7db6d8ff4d-6fl9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb1b4146f13", MAC:"66:fc:ff:74:b0:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:02.530114 containerd[1485]: 2025-05-14 00:02:02.523 [INFO][4066] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fl9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6fl9s-eth0" May 14 00:02:02.574786 containerd[1485]: time="2025-05-14T00:02:02.573999081Z" level=info msg="connecting to shim 12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1" address="unix:///run/containerd/s/eb9800b0314ad4dd292e222578dc930c0093ed22c894c397646df007249d798f" namespace=k8s.io protocol=ttrpc version=3 May 14 00:02:02.603847 systemd[1]: Started cri-containerd-12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1.scope - libcontainer container 12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1. 
May 14 00:02:02.615678 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 00:02:02.635746 containerd[1485]: time="2025-05-14T00:02:02.635697423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6fl9s,Uid:e599a8f5-76e1-4fe5-8bdd-13714150c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1\""
May 14 00:02:02.639160 containerd[1485]: time="2025-05-14T00:02:02.639119147Z" level=info msg="CreateContainer within sandbox \"12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 00:02:02.651445 containerd[1485]: time="2025-05-14T00:02:02.651398239Z" level=info msg="Container 37e5185e1aa6752a0ce7d60b18cf937272a286fc9ffefdcbad5443d2f653e16a: CDI devices from CRI Config.CDIDevices: []"
May 14 00:02:02.654362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953526488.mount: Deactivated successfully.
May 14 00:02:02.656628 containerd[1485]: time="2025-05-14T00:02:02.656584245Z" level=info msg="CreateContainer within sandbox \"12e5285ee2961d86127fdc7ee5fee157ea0aa304d7cbf9af27c56a8bf91493f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37e5185e1aa6752a0ce7d60b18cf937272a286fc9ffefdcbad5443d2f653e16a\""
May 14 00:02:02.657502 containerd[1485]: time="2025-05-14T00:02:02.657473766Z" level=info msg="StartContainer for \"37e5185e1aa6752a0ce7d60b18cf937272a286fc9ffefdcbad5443d2f653e16a\""
May 14 00:02:02.658293 containerd[1485]: time="2025-05-14T00:02:02.658260446Z" level=info msg="connecting to shim 37e5185e1aa6752a0ce7d60b18cf937272a286fc9ffefdcbad5443d2f653e16a" address="unix:///run/containerd/s/eb9800b0314ad4dd292e222578dc930c0093ed22c894c397646df007249d798f" protocol=ttrpc version=3
May 14 00:02:02.677819 systemd[1]: Started cri-containerd-37e5185e1aa6752a0ce7d60b18cf937272a286fc9ffefdcbad5443d2f653e16a.scope - libcontainer container 37e5185e1aa6752a0ce7d60b18cf937272a286fc9ffefdcbad5443d2f653e16a.
May 14 00:02:02.710734 containerd[1485]: time="2025-05-14T00:02:02.710689620Z" level=info msg="StartContainer for \"37e5185e1aa6752a0ce7d60b18cf937272a286fc9ffefdcbad5443d2f653e16a\" returns successfully" May 14 00:02:03.443167 kubelet[2705]: I0514 00:02:03.443091 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6fl9s" podStartSLOduration=27.443073816 podStartE2EDuration="27.443073816s" podCreationTimestamp="2025-05-14 00:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:02:03.442425095 +0000 UTC m=+43.225734446" watchObservedRunningTime="2025-05-14 00:02:03.443073816 +0000 UTC m=+43.226383127" May 14 00:02:04.293998 containerd[1485]: time="2025-05-14T00:02:04.293938529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ddcf4fbf9-7nlv4,Uid:8f4760a2-09e7-43d4-a406-e2aab18fa5e1,Namespace:calico-system,Attempt:0,}" May 14 00:02:04.294459 containerd[1485]: time="2025-05-14T00:02:04.294417769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd56b8668-7ppmc,Uid:e6f31dde-a3a0-4b1f-90aa-0add754aeab0,Namespace:calico-apiserver,Attempt:0,}" May 14 00:02:04.319773 systemd-networkd[1398]: calicb1b4146f13: Gained IPv6LL May 14 00:02:04.416756 systemd-networkd[1398]: cali695c0f754a6: Link UP May 14 00:02:04.417203 systemd-networkd[1398]: cali695c0f754a6: Gained carrier May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.342 [INFO][4201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0 calico-apiserver-bd56b8668- calico-apiserver e6f31dde-a3a0-4b1f-90aa-0add754aeab0 685 0 2025-05-14 00:01:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bd56b8668 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bd56b8668-7ppmc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali695c0f754a6 [] []}} ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-7ppmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.342 [INFO][4201] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-7ppmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.367 [INFO][4221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" HandleID="k8s-pod-network.42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Workload="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.384 [INFO][4221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" HandleID="k8s-pod-network.42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Workload="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aafd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bd56b8668-7ppmc", "timestamp":"2025-05-14 00:02:04.367836795 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.384 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.384 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.384 [INFO][4221] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.386 [INFO][4221] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" host="localhost" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.390 [INFO][4221] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.394 [INFO][4221] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.395 [INFO][4221] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.399 [INFO][4221] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.399 [INFO][4221] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" host="localhost" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.401 [INFO][4221] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122 May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.404 [INFO][4221] 
ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" host="localhost" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.410 [INFO][4221] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" host="localhost" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.410 [INFO][4221] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" host="localhost" May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.410 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:02:04.436312 containerd[1485]: 2025-05-14 00:02:04.410 [INFO][4221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" HandleID="k8s-pod-network.42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Workload="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0" May 14 00:02:04.436832 containerd[1485]: 2025-05-14 00:02:04.412 [INFO][4201] cni-plugin/k8s.go 386: Populated endpoint ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-7ppmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0", GenerateName:"calico-apiserver-bd56b8668-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6f31dde-a3a0-4b1f-90aa-0add754aeab0", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, 
time.May, 14, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd56b8668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bd56b8668-7ppmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali695c0f754a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:04.436832 containerd[1485]: 2025-05-14 00:02:04.412 [INFO][4201] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-7ppmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0" May 14 00:02:04.436832 containerd[1485]: 2025-05-14 00:02:04.412 [INFO][4201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali695c0f754a6 ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-7ppmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0" May 14 00:02:04.436832 containerd[1485]: 2025-05-14 00:02:04.417 [INFO][4201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-7ppmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0" May 14 00:02:04.436832 containerd[1485]: 2025-05-14 00:02:04.418 [INFO][4201] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-7ppmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0", GenerateName:"calico-apiserver-bd56b8668-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6f31dde-a3a0-4b1f-90aa-0add754aeab0", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd56b8668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122", Pod:"calico-apiserver-bd56b8668-7ppmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali695c0f754a6", MAC:"06:69:03:4a:39:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:04.436832 containerd[1485]: 2025-05-14 00:02:04.430 [INFO][4201] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-7ppmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--7ppmc-eth0" May 14 00:02:04.449320 systemd-networkd[1398]: calic4936e1545b: Link UP May 14 00:02:04.450418 systemd-networkd[1398]: calic4936e1545b: Gained carrier May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.339 [INFO][4190] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0 calico-kube-controllers-7ddcf4fbf9- calico-system 8f4760a2-09e7-43d4-a406-e2aab18fa5e1 688 0 2025-05-14 00:01:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7ddcf4fbf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7ddcf4fbf9-7nlv4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic4936e1545b [] []}} ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Namespace="calico-system" Pod="calico-kube-controllers-7ddcf4fbf9-7nlv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.339 [INFO][4190] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Namespace="calico-system" Pod="calico-kube-controllers-7ddcf4fbf9-7nlv4" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.374 [INFO][4219] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" HandleID="k8s-pod-network.2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Workload="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.387 [INFO][4219] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" HandleID="k8s-pod-network.2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Workload="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004356c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7ddcf4fbf9-7nlv4", "timestamp":"2025-05-14 00:02:04.374613881 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.387 [INFO][4219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.410 [INFO][4219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.410 [INFO][4219] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.411 [INFO][4219] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" host="localhost" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.417 [INFO][4219] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.424 [INFO][4219] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.426 [INFO][4219] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.429 [INFO][4219] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.429 [INFO][4219] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" host="localhost" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.431 [INFO][4219] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.435 [INFO][4219] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" host="localhost" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.441 [INFO][4219] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" host="localhost" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.441 [INFO][4219] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" host="localhost" May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.441 [INFO][4219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:02:04.465786 containerd[1485]: 2025-05-14 00:02:04.441 [INFO][4219] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" HandleID="k8s-pod-network.2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Workload="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0" May 14 00:02:04.466466 containerd[1485]: 2025-05-14 00:02:04.443 [INFO][4190] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Namespace="calico-system" Pod="calico-kube-controllers-7ddcf4fbf9-7nlv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0", GenerateName:"calico-kube-controllers-7ddcf4fbf9-", Namespace:"calico-system", SelfLink:"", UID:"8f4760a2-09e7-43d4-a406-e2aab18fa5e1", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ddcf4fbf9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7ddcf4fbf9-7nlv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4936e1545b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:04.466466 containerd[1485]: 2025-05-14 00:02:04.444 [INFO][4190] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Namespace="calico-system" Pod="calico-kube-controllers-7ddcf4fbf9-7nlv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0" May 14 00:02:04.466466 containerd[1485]: 2025-05-14 00:02:04.444 [INFO][4190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4936e1545b ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Namespace="calico-system" Pod="calico-kube-controllers-7ddcf4fbf9-7nlv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0" May 14 00:02:04.466466 containerd[1485]: 2025-05-14 00:02:04.449 [INFO][4190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Namespace="calico-system" Pod="calico-kube-controllers-7ddcf4fbf9-7nlv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0" May 14 00:02:04.466466 containerd[1485]: 2025-05-14 00:02:04.449 [INFO][4190] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Namespace="calico-system" Pod="calico-kube-controllers-7ddcf4fbf9-7nlv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0", GenerateName:"calico-kube-controllers-7ddcf4fbf9-", Namespace:"calico-system", SelfLink:"", UID:"8f4760a2-09e7-43d4-a406-e2aab18fa5e1", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ddcf4fbf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c", Pod:"calico-kube-controllers-7ddcf4fbf9-7nlv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4936e1545b", MAC:"82:36:2e:04:83:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:04.466466 containerd[1485]: 2025-05-14 00:02:04.458 [INFO][4190] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" Namespace="calico-system" Pod="calico-kube-controllers-7ddcf4fbf9-7nlv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcf4fbf9--7nlv4-eth0" May 14 00:02:04.473011 containerd[1485]: time="2025-05-14T00:02:04.472458008Z" level=info msg="connecting to shim 42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122" address="unix:///run/containerd/s/413ce696f22ba88c8e11b0f82a13fe381f3cfde0c6d9aeda75d49690885accb3" namespace=k8s.io protocol=ttrpc version=3 May 14 00:02:04.494516 containerd[1485]: time="2025-05-14T00:02:04.494239588Z" level=info msg="connecting to shim 2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c" address="unix:///run/containerd/s/2c1d82d18cf4e3e740833460744f1a9daa8177b741f9d03a0dec9d2995a9ec1a" namespace=k8s.io protocol=ttrpc version=3 May 14 00:02:04.497816 systemd[1]: Started cri-containerd-42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122.scope - libcontainer container 42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122. May 14 00:02:04.512106 systemd[1]: Started cri-containerd-2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c.scope - libcontainer container 2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c. 
May 14 00:02:04.526591 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:02:04.528159 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:02:04.552226 containerd[1485]: time="2025-05-14T00:02:04.552021759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ddcf4fbf9-7nlv4,Uid:8f4760a2-09e7-43d4-a406-e2aab18fa5e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c\"" May 14 00:02:04.554137 containerd[1485]: time="2025-05-14T00:02:04.553865041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 00:02:04.558621 containerd[1485]: time="2025-05-14T00:02:04.558587085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd56b8668-7ppmc,Uid:e6f31dde-a3a0-4b1f-90aa-0add754aeab0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122\"" May 14 00:02:05.296240 containerd[1485]: time="2025-05-14T00:02:05.295954647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srbbq,Uid:1ba7051e-a376-42a7-9404-d780e62f7c49,Namespace:kube-system,Attempt:0,}" May 14 00:02:05.296611 containerd[1485]: time="2025-05-14T00:02:05.296448408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd56b8668-528js,Uid:4542482e-1851-4643-8bf4-c7f756cc0345,Namespace:calico-apiserver,Attempt:0,}" May 14 00:02:05.297294 containerd[1485]: time="2025-05-14T00:02:05.297247489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9hj6,Uid:c8ed3b65-eee6-481b-bcd9-c2f7489b7d71,Namespace:calico-system,Attempt:0,}" May 14 00:02:05.472874 systemd-networkd[1398]: cali37a349451eb: Link UP May 14 00:02:05.473247 systemd-networkd[1398]: cali37a349451eb: 
Gained carrier May 14 00:02:05.473547 systemd-networkd[1398]: cali695c0f754a6: Gained IPv6LL May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.360 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0 coredns-7db6d8ff4d- kube-system 1ba7051e-a376-42a7-9404-d780e62f7c49 690 0 2025-05-14 00:01:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-srbbq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali37a349451eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srbbq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srbbq-" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.360 [INFO][4355] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srbbq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.410 [INFO][4401] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" HandleID="k8s-pod-network.7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" Workload="localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.423 [INFO][4401] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" HandleID="k8s-pod-network.7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" 
Workload="localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030ad70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-srbbq", "timestamp":"2025-05-14 00:02:05.410026583 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.423 [INFO][4401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.423 [INFO][4401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.423 [INFO][4401] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.425 [INFO][4401] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" host="localhost" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.438 [INFO][4401] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.443 [INFO][4401] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.444 [INFO][4401] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.447 [INFO][4401] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.447 [INFO][4401] ipam/ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" host="localhost" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.448 [INFO][4401] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028 May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.454 [INFO][4401] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" host="localhost" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.466 [INFO][4401] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" host="localhost" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.466 [INFO][4401] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" host="localhost" May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.466 [INFO][4401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 00:02:05.487252 containerd[1485]: 2025-05-14 00:02:05.466 [INFO][4401] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" HandleID="k8s-pod-network.7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" Workload="localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0" May 14 00:02:05.487847 containerd[1485]: 2025-05-14 00:02:05.468 [INFO][4355] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srbbq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1ba7051e-a376-42a7-9404-d780e62f7c49", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-srbbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali37a349451eb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:05.487847 containerd[1485]: 2025-05-14 00:02:05.469 [INFO][4355] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srbbq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0" May 14 00:02:05.487847 containerd[1485]: 2025-05-14 00:02:05.469 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37a349451eb ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srbbq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0" May 14 00:02:05.487847 containerd[1485]: 2025-05-14 00:02:05.471 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srbbq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0" May 14 00:02:05.487847 containerd[1485]: 2025-05-14 00:02:05.472 [INFO][4355] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srbbq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1ba7051e-a376-42a7-9404-d780e62f7c49", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028", Pod:"coredns-7db6d8ff4d-srbbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali37a349451eb", MAC:"a2:ff:db:b8:c1:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:05.487847 containerd[1485]: 2025-05-14 00:02:05.484 [INFO][4355] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srbbq" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srbbq-eth0" May 14 00:02:05.516282 systemd-networkd[1398]: calibf87b636bab: Link UP May 14 00:02:05.517357 systemd-networkd[1398]: calibf87b636bab: Gained carrier May 14 00:02:05.523915 containerd[1485]: time="2025-05-14T00:02:05.523874238Z" level=info msg="connecting to shim 7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028" address="unix:///run/containerd/s/36f198df3c712907d3288929210b499cb33ee844610f98874fbb77f00a361e5c" namespace=k8s.io protocol=ttrpc version=3 May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.364 [INFO][4361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bd56b8668--528js-eth0 calico-apiserver-bd56b8668- calico-apiserver 4542482e-1851-4643-8bf4-c7f756cc0345 687 0 2025-05-14 00:01:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bd56b8668 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bd56b8668-528js eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibf87b636bab [] []}} ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-528js" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--528js-" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.364 [INFO][4361] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-528js" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--528js-eth0" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.413 [INFO][4408] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" HandleID="k8s-pod-network.94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Workload="localhost-k8s-calico--apiserver--bd56b8668--528js-eth0" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.426 [INFO][4408] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" HandleID="k8s-pod-network.94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Workload="localhost-k8s-calico--apiserver--bd56b8668--528js-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000428420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bd56b8668-528js", "timestamp":"2025-05-14 00:02:05.413518426 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.427 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.466 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.466 [INFO][4408] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.469 [INFO][4408] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" host="localhost" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.477 [INFO][4408] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.485 [INFO][4408] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.488 [INFO][4408] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.493 [INFO][4408] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.493 [INFO][4408] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" host="localhost" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.496 [INFO][4408] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45 May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.501 [INFO][4408] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" host="localhost" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.509 [INFO][4408] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" host="localhost" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.509 [INFO][4408] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" host="localhost" May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.509 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:02:05.538664 containerd[1485]: 2025-05-14 00:02:05.509 [INFO][4408] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" HandleID="k8s-pod-network.94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Workload="localhost-k8s-calico--apiserver--bd56b8668--528js-eth0" May 14 00:02:05.539198 containerd[1485]: 2025-05-14 00:02:05.514 [INFO][4361] cni-plugin/k8s.go 386: Populated endpoint ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-528js" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--528js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd56b8668--528js-eth0", GenerateName:"calico-apiserver-bd56b8668-", Namespace:"calico-apiserver", SelfLink:"", UID:"4542482e-1851-4643-8bf4-c7f756cc0345", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd56b8668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bd56b8668-528js", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf87b636bab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:05.539198 containerd[1485]: 2025-05-14 00:02:05.514 [INFO][4361] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-528js" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--528js-eth0" May 14 00:02:05.539198 containerd[1485]: 2025-05-14 00:02:05.514 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf87b636bab ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-528js" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--528js-eth0" May 14 00:02:05.539198 containerd[1485]: 2025-05-14 00:02:05.517 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-528js" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--528js-eth0" May 14 00:02:05.539198 containerd[1485]: 2025-05-14 00:02:05.518 [INFO][4361] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Namespace="calico-apiserver" Pod="calico-apiserver-bd56b8668-528js" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--528js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd56b8668--528js-eth0", GenerateName:"calico-apiserver-bd56b8668-", Namespace:"calico-apiserver", SelfLink:"", UID:"4542482e-1851-4643-8bf4-c7f756cc0345", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd56b8668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45", Pod:"calico-apiserver-bd56b8668-528js", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf87b636bab", MAC:"8e:c4:40:cb:46:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:05.539198 containerd[1485]: 2025-05-14 00:02:05.529 [INFO][4361] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" Namespace="calico-apiserver" 
Pod="calico-apiserver-bd56b8668-528js" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd56b8668--528js-eth0" May 14 00:02:05.565969 systemd[1]: Started cri-containerd-7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028.scope - libcontainer container 7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028. May 14 00:02:05.573281 systemd-networkd[1398]: calife1facf69c1: Link UP May 14 00:02:05.573980 systemd-networkd[1398]: calife1facf69c1: Gained carrier May 14 00:02:05.589558 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.393 [INFO][4384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c9hj6-eth0 csi-node-driver- calico-system c8ed3b65-eee6-481b-bcd9-c2f7489b7d71 593 0 2025-05-14 00:01:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c9hj6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calife1facf69c1 [] []}} ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Namespace="calico-system" Pod="csi-node-driver-c9hj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9hj6-" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.393 [INFO][4384] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Namespace="calico-system" Pod="csi-node-driver-c9hj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9hj6-eth0" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.429 [INFO][4416] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" HandleID="k8s-pod-network.76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Workload="localhost-k8s-csi--node--driver--c9hj6-eth0" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.442 [INFO][4416] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" HandleID="k8s-pod-network.76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Workload="localhost-k8s-csi--node--driver--c9hj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000372330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c9hj6", "timestamp":"2025-05-14 00:02:05.429029959 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.443 [INFO][4416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.509 [INFO][4416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.509 [INFO][4416] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.512 [INFO][4416] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" host="localhost" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.522 [INFO][4416] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.542 [INFO][4416] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.545 [INFO][4416] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.547 [INFO][4416] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.547 [INFO][4416] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" host="localhost" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.550 [INFO][4416] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.555 [INFO][4416] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" host="localhost" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.564 [INFO][4416] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" host="localhost" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.564 [INFO][4416] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" host="localhost" May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.564 [INFO][4416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:02:05.597384 containerd[1485]: 2025-05-14 00:02:05.564 [INFO][4416] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" HandleID="k8s-pod-network.76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Workload="localhost-k8s-csi--node--driver--c9hj6-eth0" May 14 00:02:05.597903 containerd[1485]: 2025-05-14 00:02:05.569 [INFO][4384] cni-plugin/k8s.go 386: Populated endpoint ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Namespace="calico-system" Pod="csi-node-driver-c9hj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9hj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c9hj6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c8ed3b65-eee6-481b-bcd9-c2f7489b7d71", ResourceVersion:"593", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c9hj6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife1facf69c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:05.597903 containerd[1485]: 2025-05-14 00:02:05.569 [INFO][4384] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Namespace="calico-system" Pod="csi-node-driver-c9hj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9hj6-eth0" May 14 00:02:05.597903 containerd[1485]: 2025-05-14 00:02:05.569 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife1facf69c1 ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Namespace="calico-system" Pod="csi-node-driver-c9hj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9hj6-eth0" May 14 00:02:05.597903 containerd[1485]: 2025-05-14 00:02:05.574 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Namespace="calico-system" Pod="csi-node-driver-c9hj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9hj6-eth0" May 14 00:02:05.597903 containerd[1485]: 2025-05-14 00:02:05.576 [INFO][4384] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Namespace="calico-system" 
Pod="csi-node-driver-c9hj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9hj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c9hj6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c8ed3b65-eee6-481b-bcd9-c2f7489b7d71", ResourceVersion:"593", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d", Pod:"csi-node-driver-c9hj6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife1facf69c1", MAC:"4a:b8:bc:3f:0d:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:02:05.597903 containerd[1485]: 2025-05-14 00:02:05.591 [INFO][4384] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" Namespace="calico-system" Pod="csi-node-driver-c9hj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9hj6-eth0" May 14 00:02:05.607969 containerd[1485]: 
time="2025-05-14T00:02:05.607898029Z" level=info msg="connecting to shim 94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45" address="unix:///run/containerd/s/bec24b118ec2e991fb8b4ab733df3c35c31a7dfd8e2b2ff4d9554b732bdfff37" namespace=k8s.io protocol=ttrpc version=3 May 14 00:02:05.627540 containerd[1485]: time="2025-05-14T00:02:05.627466525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srbbq,Uid:1ba7051e-a376-42a7-9404-d780e62f7c49,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028\"" May 14 00:02:05.630634 containerd[1485]: time="2025-05-14T00:02:05.630586128Z" level=info msg="CreateContainer within sandbox \"7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:02:05.640058 containerd[1485]: time="2025-05-14T00:02:05.639984576Z" level=info msg="connecting to shim 76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d" address="unix:///run/containerd/s/0fe53d140b4d7e2e80be73c7fb3029d3f2eaf62068e67051382e942b51498803" namespace=k8s.io protocol=ttrpc version=3 May 14 00:02:05.645835 systemd[1]: Started cri-containerd-94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45.scope - libcontainer container 94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45. 
May 14 00:02:05.649044 containerd[1485]: time="2025-05-14T00:02:05.649010663Z" level=info msg="Container 75275438c24e7d85b0611306c58fee4587dca8e5e3c1b537aee663b56c599d48: CDI devices from CRI Config.CDIDevices: []" May 14 00:02:05.669329 containerd[1485]: time="2025-05-14T00:02:05.669293440Z" level=info msg="CreateContainer within sandbox \"7b2e6ecc8f41eca4c75bf1ec2989ed3fd948cf3f98502460968491ad563be028\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75275438c24e7d85b0611306c58fee4587dca8e5e3c1b537aee663b56c599d48\"" May 14 00:02:05.670057 containerd[1485]: time="2025-05-14T00:02:05.670014721Z" level=info msg="StartContainer for \"75275438c24e7d85b0611306c58fee4587dca8e5e3c1b537aee663b56c599d48\"" May 14 00:02:05.671959 containerd[1485]: time="2025-05-14T00:02:05.671859202Z" level=info msg="connecting to shim 75275438c24e7d85b0611306c58fee4587dca8e5e3c1b537aee663b56c599d48" address="unix:///run/containerd/s/36f198df3c712907d3288929210b499cb33ee844610f98874fbb77f00a361e5c" protocol=ttrpc version=3 May 14 00:02:05.673617 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:02:05.675925 systemd[1]: Started cri-containerd-76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d.scope - libcontainer container 76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d. May 14 00:02:05.703718 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:02:05.707810 systemd[1]: Started cri-containerd-75275438c24e7d85b0611306c58fee4587dca8e5e3c1b537aee663b56c599d48.scope - libcontainer container 75275438c24e7d85b0611306c58fee4587dca8e5e3c1b537aee663b56c599d48. 
May 14 00:02:05.716961 containerd[1485]: time="2025-05-14T00:02:05.716919680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd56b8668-528js,Uid:4542482e-1851-4643-8bf4-c7f756cc0345,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45\"" May 14 00:02:05.732479 containerd[1485]: time="2025-05-14T00:02:05.732373133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9hj6,Uid:c8ed3b65-eee6-481b-bcd9-c2f7489b7d71,Namespace:calico-system,Attempt:0,} returns sandbox id \"76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d\"" May 14 00:02:05.744544 containerd[1485]: time="2025-05-14T00:02:05.744429303Z" level=info msg="StartContainer for \"75275438c24e7d85b0611306c58fee4587dca8e5e3c1b537aee663b56c599d48\" returns successfully" May 14 00:02:05.893341 systemd[1]: Started sshd@12-10.0.0.141:22-10.0.0.1:53760.service - OpenSSH per-connection server daemon (10.0.0.1:53760). May 14 00:02:05.960213 sshd[4646]: Accepted publickey for core from 10.0.0.1 port 53760 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 14 00:02:05.962285 sshd-session[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:05.967025 systemd-logind[1466]: New session 13 of user core. May 14 00:02:05.973844 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 14 00:02:06.127549 containerd[1485]: time="2025-05-14T00:02:06.127491377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:06.128222 containerd[1485]: time="2025-05-14T00:02:06.128173658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 14 00:02:06.129892 containerd[1485]: time="2025-05-14T00:02:06.129857219Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:06.133204 containerd[1485]: time="2025-05-14T00:02:06.133162782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.579262061s" May 14 00:02:06.133204 containerd[1485]: time="2025-05-14T00:02:06.133201302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 14 00:02:06.133402 containerd[1485]: time="2025-05-14T00:02:06.133366342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:06.135375 containerd[1485]: time="2025-05-14T00:02:06.135340143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 00:02:06.142315 containerd[1485]: time="2025-05-14T00:02:06.142276109Z" level=info msg="CreateContainer within sandbox 
\"2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 00:02:06.149317 containerd[1485]: time="2025-05-14T00:02:06.149183194Z" level=info msg="Container 05f2cbc76a459a98efe627fd4f77e44c491888d2f021ca86f6279eabc33bb60c: CDI devices from CRI Config.CDIDevices: []" May 14 00:02:06.160746 containerd[1485]: time="2025-05-14T00:02:06.160531923Z" level=info msg="CreateContainer within sandbox \"2d7d3b03ee87d67d77f700aeda442b54eae96486e7414c68e9afc9ed483cb84c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"05f2cbc76a459a98efe627fd4f77e44c491888d2f021ca86f6279eabc33bb60c\"" May 14 00:02:06.162614 containerd[1485]: time="2025-05-14T00:02:06.162580125Z" level=info msg="StartContainer for \"05f2cbc76a459a98efe627fd4f77e44c491888d2f021ca86f6279eabc33bb60c\"" May 14 00:02:06.164868 containerd[1485]: time="2025-05-14T00:02:06.164830566Z" level=info msg="connecting to shim 05f2cbc76a459a98efe627fd4f77e44c491888d2f021ca86f6279eabc33bb60c" address="unix:///run/containerd/s/2c1d82d18cf4e3e740833460744f1a9daa8177b741f9d03a0dec9d2995a9ec1a" protocol=ttrpc version=3 May 14 00:02:06.176203 systemd-networkd[1398]: calic4936e1545b: Gained IPv6LL May 14 00:02:06.179576 sshd[4648]: Connection closed by 10.0.0.1 port 53760 May 14 00:02:06.181569 sshd-session[4646]: pam_unix(sshd:session): session closed for user core May 14 00:02:06.186074 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit. May 14 00:02:06.186321 systemd[1]: sshd@12-10.0.0.141:22-10.0.0.1:53760.service: Deactivated successfully. May 14 00:02:06.188783 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:02:06.191487 systemd-logind[1466]: Removed session 13. 
May 14 00:02:06.199829 systemd[1]: Started cri-containerd-05f2cbc76a459a98efe627fd4f77e44c491888d2f021ca86f6279eabc33bb60c.scope - libcontainer container 05f2cbc76a459a98efe627fd4f77e44c491888d2f021ca86f6279eabc33bb60c. May 14 00:02:06.239852 containerd[1485]: time="2025-05-14T00:02:06.239344945Z" level=info msg="StartContainer for \"05f2cbc76a459a98efe627fd4f77e44c491888d2f021ca86f6279eabc33bb60c\" returns successfully" May 14 00:02:06.484316 kubelet[2705]: I0514 00:02:06.484242 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7ddcf4fbf9-7nlv4" podStartSLOduration=23.903179316 podStartE2EDuration="25.484221857s" podCreationTimestamp="2025-05-14 00:01:41 +0000 UTC" firstStartedPulling="2025-05-14 00:02:04.553357681 +0000 UTC m=+44.336667032" lastFinishedPulling="2025-05-14 00:02:06.134400222 +0000 UTC m=+45.917709573" observedRunningTime="2025-05-14 00:02:06.471549527 +0000 UTC m=+46.254858878" watchObservedRunningTime="2025-05-14 00:02:06.484221857 +0000 UTC m=+46.267531248" May 14 00:02:06.494160 containerd[1485]: time="2025-05-14T00:02:06.494122865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05f2cbc76a459a98efe627fd4f77e44c491888d2f021ca86f6279eabc33bb60c\" id:\"c5fa5300d18035d4aad1723747d4895bc1209851af3922b28669d49fdabd3831\" pid:4713 exited_at:{seconds:1747180926 nanos:493791145}" May 14 00:02:06.508811 kubelet[2705]: I0514 00:02:06.508755 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-srbbq" podStartSLOduration=30.508738996 podStartE2EDuration="30.508738996s" podCreationTimestamp="2025-05-14 00:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:02:06.484399537 +0000 UTC m=+46.267708928" watchObservedRunningTime="2025-05-14 00:02:06.508738996 +0000 UTC m=+46.292048347" May 14 00:02:07.007812 
systemd-networkd[1398]: calife1facf69c1: Gained IPv6LL May 14 00:02:07.070800 systemd-networkd[1398]: calibf87b636bab: Gained IPv6LL May 14 00:02:07.455212 systemd-networkd[1398]: cali37a349451eb: Gained IPv6LL May 14 00:02:07.576254 containerd[1485]: time="2025-05-14T00:02:07.576188726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:07.577701 containerd[1485]: time="2025-05-14T00:02:07.577636607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 14 00:02:07.579147 containerd[1485]: time="2025-05-14T00:02:07.578927808Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:07.581146 containerd[1485]: time="2025-05-14T00:02:07.580900609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:07.581694 containerd[1485]: time="2025-05-14T00:02:07.581547330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.446167307s" May 14 00:02:07.581694 containerd[1485]: time="2025-05-14T00:02:07.581584890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 00:02:07.582731 containerd[1485]: time="2025-05-14T00:02:07.582629851Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 00:02:07.583730 containerd[1485]: time="2025-05-14T00:02:07.583598571Z" level=info msg="CreateContainer within sandbox \"42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:02:07.592272 containerd[1485]: time="2025-05-14T00:02:07.592212418Z" level=info msg="Container d1a19f360e8fcf7cc502d6e09af3c5afa0ebeedd3fccf6bb6b6e4ffaec3f959d: CDI devices from CRI Config.CDIDevices: []" May 14 00:02:07.597915 containerd[1485]: time="2025-05-14T00:02:07.597869662Z" level=info msg="CreateContainer within sandbox \"42babf1ccdf86295c0ddd5b82d89b3a8c3aa9be71536429e86d2c199e4a32122\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d1a19f360e8fcf7cc502d6e09af3c5afa0ebeedd3fccf6bb6b6e4ffaec3f959d\"" May 14 00:02:07.598530 containerd[1485]: time="2025-05-14T00:02:07.598407822Z" level=info msg="StartContainer for \"d1a19f360e8fcf7cc502d6e09af3c5afa0ebeedd3fccf6bb6b6e4ffaec3f959d\"" May 14 00:02:07.599678 containerd[1485]: time="2025-05-14T00:02:07.599623303Z" level=info msg="connecting to shim d1a19f360e8fcf7cc502d6e09af3c5afa0ebeedd3fccf6bb6b6e4ffaec3f959d" address="unix:///run/containerd/s/413ce696f22ba88c8e11b0f82a13fe381f3cfde0c6d9aeda75d49690885accb3" protocol=ttrpc version=3 May 14 00:02:07.620858 systemd[1]: Started cri-containerd-d1a19f360e8fcf7cc502d6e09af3c5afa0ebeedd3fccf6bb6b6e4ffaec3f959d.scope - libcontainer container d1a19f360e8fcf7cc502d6e09af3c5afa0ebeedd3fccf6bb6b6e4ffaec3f959d. 
May 14 00:02:07.656344 containerd[1485]: time="2025-05-14T00:02:07.656306105Z" level=info msg="StartContainer for \"d1a19f360e8fcf7cc502d6e09af3c5afa0ebeedd3fccf6bb6b6e4ffaec3f959d\" returns successfully" May 14 00:02:07.850250 containerd[1485]: time="2025-05-14T00:02:07.850109488Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:07.851178 containerd[1485]: time="2025-05-14T00:02:07.850992808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 00:02:07.853261 containerd[1485]: time="2025-05-14T00:02:07.853094890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 270.353559ms" May 14 00:02:07.853261 containerd[1485]: time="2025-05-14T00:02:07.853141410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 00:02:07.854511 containerd[1485]: time="2025-05-14T00:02:07.854278811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 00:02:07.856307 containerd[1485]: time="2025-05-14T00:02:07.856261612Z" level=info msg="CreateContainer within sandbox \"94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:02:07.863217 containerd[1485]: time="2025-05-14T00:02:07.863160417Z" level=info msg="Container 1f674bc871a767f60c5e954269df49d60c503fcab29cc2d428271e15b5550956: CDI devices from CRI Config.CDIDevices: []" May 14 00:02:07.876251 containerd[1485]: 
time="2025-05-14T00:02:07.876203787Z" level=info msg="CreateContainer within sandbox \"94a17a7d50d2e26bac91d5f260dec369f7cf1efbb8dcd9a157afb94398055b45\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1f674bc871a767f60c5e954269df49d60c503fcab29cc2d428271e15b5550956\"" May 14 00:02:07.878556 containerd[1485]: time="2025-05-14T00:02:07.878500068Z" level=info msg="StartContainer for \"1f674bc871a767f60c5e954269df49d60c503fcab29cc2d428271e15b5550956\"" May 14 00:02:07.885787 containerd[1485]: time="2025-05-14T00:02:07.885344394Z" level=info msg="connecting to shim 1f674bc871a767f60c5e954269df49d60c503fcab29cc2d428271e15b5550956" address="unix:///run/containerd/s/bec24b118ec2e991fb8b4ab733df3c35c31a7dfd8e2b2ff4d9554b732bdfff37" protocol=ttrpc version=3 May 14 00:02:07.907815 systemd[1]: Started cri-containerd-1f674bc871a767f60c5e954269df49d60c503fcab29cc2d428271e15b5550956.scope - libcontainer container 1f674bc871a767f60c5e954269df49d60c503fcab29cc2d428271e15b5550956. 
May 14 00:02:07.948665 containerd[1485]: time="2025-05-14T00:02:07.948610320Z" level=info msg="StartContainer for \"1f674bc871a767f60c5e954269df49d60c503fcab29cc2d428271e15b5550956\" returns successfully"
May 14 00:02:08.475067 kubelet[2705]: I0514 00:02:08.473774 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bd56b8668-528js" podStartSLOduration=25.337978355 podStartE2EDuration="27.473756445s" podCreationTimestamp="2025-05-14 00:01:41 +0000 UTC" firstStartedPulling="2025-05-14 00:02:05.718347401 +0000 UTC m=+45.501656752" lastFinishedPulling="2025-05-14 00:02:07.854125491 +0000 UTC m=+47.637434842" observedRunningTime="2025-05-14 00:02:08.473285764 +0000 UTC m=+48.256595155" watchObservedRunningTime="2025-05-14 00:02:08.473756445 +0000 UTC m=+48.257065796"
May 14 00:02:08.487994 kubelet[2705]: I0514 00:02:08.487809 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bd56b8668-7ppmc" podStartSLOduration=24.469376093 podStartE2EDuration="27.487789254s" podCreationTimestamp="2025-05-14 00:01:41 +0000 UTC" firstStartedPulling="2025-05-14 00:02:04.56398557 +0000 UTC m=+44.347294921" lastFinishedPulling="2025-05-14 00:02:07.582398731 +0000 UTC m=+47.365708082" observedRunningTime="2025-05-14 00:02:08.485531373 +0000 UTC m=+48.268840724" watchObservedRunningTime="2025-05-14 00:02:08.487789254 +0000 UTC m=+48.271098605"
May 14 00:02:08.734558 containerd[1485]: time="2025-05-14T00:02:08.734419225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:02:08.735928 containerd[1485]: time="2025-05-14T00:02:08.735872146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935"
May 14 00:02:08.736747 containerd[1485]: time="2025-05-14T00:02:08.736716386Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:02:08.739317 containerd[1485]: time="2025-05-14T00:02:08.739223148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:02:08.739883 containerd[1485]: time="2025-05-14T00:02:08.739722948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 885.410537ms"
May 14 00:02:08.739883 containerd[1485]: time="2025-05-14T00:02:08.739758428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\""
May 14 00:02:08.746824 containerd[1485]: time="2025-05-14T00:02:08.746784473Z" level=info msg="CreateContainer within sandbox \"76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 14 00:02:08.765579 containerd[1485]: time="2025-05-14T00:02:08.765532326Z" level=info msg="Container 40a1eb0a13f8cecc9f9e2096ed456927ed8eb61241d3b61ff37ca81061304d36: CDI devices from CRI Config.CDIDevices: []"
May 14 00:02:08.798767 containerd[1485]: time="2025-05-14T00:02:08.798717749Z" level=info msg="CreateContainer within sandbox \"76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"40a1eb0a13f8cecc9f9e2096ed456927ed8eb61241d3b61ff37ca81061304d36\""
May 14 00:02:08.799415 containerd[1485]: time="2025-05-14T00:02:08.799363429Z" level=info msg="StartContainer for \"40a1eb0a13f8cecc9f9e2096ed456927ed8eb61241d3b61ff37ca81061304d36\""
May 14 00:02:08.800907 containerd[1485]: time="2025-05-14T00:02:08.800867590Z" level=info msg="connecting to shim 40a1eb0a13f8cecc9f9e2096ed456927ed8eb61241d3b61ff37ca81061304d36" address="unix:///run/containerd/s/0fe53d140b4d7e2e80be73c7fb3029d3f2eaf62068e67051382e942b51498803" protocol=ttrpc version=3
May 14 00:02:08.824848 systemd[1]: Started cri-containerd-40a1eb0a13f8cecc9f9e2096ed456927ed8eb61241d3b61ff37ca81061304d36.scope - libcontainer container 40a1eb0a13f8cecc9f9e2096ed456927ed8eb61241d3b61ff37ca81061304d36.
May 14 00:02:08.872040 containerd[1485]: time="2025-05-14T00:02:08.871330639Z" level=info msg="StartContainer for \"40a1eb0a13f8cecc9f9e2096ed456927ed8eb61241d3b61ff37ca81061304d36\" returns successfully"
May 14 00:02:08.873863 containerd[1485]: time="2025-05-14T00:02:08.873785641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 14 00:02:09.470291 kubelet[2705]: I0514 00:02:09.470258 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 00:02:09.831613 containerd[1485]: time="2025-05-14T00:02:09.831496906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:02:09.832469 containerd[1485]: time="2025-05-14T00:02:09.832085906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 14 00:02:09.833083 containerd[1485]: time="2025-05-14T00:02:09.833018107Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:02:09.835280 containerd[1485]: time="2025-05-14T00:02:09.835041508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:02:09.835712 containerd[1485]: time="2025-05-14T00:02:09.835658548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 961.822307ms"
May 14 00:02:09.835712 containerd[1485]: time="2025-05-14T00:02:09.835690668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 14 00:02:09.839551 containerd[1485]: time="2025-05-14T00:02:09.838659350Z" level=info msg="CreateContainer within sandbox \"76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 14 00:02:09.852032 containerd[1485]: time="2025-05-14T00:02:09.851996879Z" level=info msg="Container 233fd0035a016024439fc817ad5e789a118b5e98a9781434ab06bffad6e2f0b1: CDI devices from CRI Config.CDIDevices: []"
May 14 00:02:09.863757 containerd[1485]: time="2025-05-14T00:02:09.863715126Z" level=info msg="CreateContainer within sandbox \"76c30abe7167f02724507cb032f4b281cedbb4c5540341d5e00adafcefd2b07d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"233fd0035a016024439fc817ad5e789a118b5e98a9781434ab06bffad6e2f0b1\""
May 14 00:02:09.864712 containerd[1485]: time="2025-05-14T00:02:09.864520807Z" level=info msg="StartContainer for \"233fd0035a016024439fc817ad5e789a118b5e98a9781434ab06bffad6e2f0b1\""
May 14 00:02:09.867799 containerd[1485]: time="2025-05-14T00:02:09.867772089Z" level=info msg="connecting to shim 233fd0035a016024439fc817ad5e789a118b5e98a9781434ab06bffad6e2f0b1" address="unix:///run/containerd/s/0fe53d140b4d7e2e80be73c7fb3029d3f2eaf62068e67051382e942b51498803" protocol=ttrpc version=3
May 14 00:02:09.887843 systemd[1]: Started cri-containerd-233fd0035a016024439fc817ad5e789a118b5e98a9781434ab06bffad6e2f0b1.scope - libcontainer container 233fd0035a016024439fc817ad5e789a118b5e98a9781434ab06bffad6e2f0b1.
May 14 00:02:09.922719 containerd[1485]: time="2025-05-14T00:02:09.921394524Z" level=info msg="StartContainer for \"233fd0035a016024439fc817ad5e789a118b5e98a9781434ab06bffad6e2f0b1\" returns successfully"
May 14 00:02:10.388830 kubelet[2705]: I0514 00:02:10.388581 2705 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 14 00:02:10.404706 kubelet[2705]: I0514 00:02:10.404665 2705 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 14 00:02:10.494096 kubelet[2705]: I0514 00:02:10.494020 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c9hj6" podStartSLOduration=25.3918399 podStartE2EDuration="29.494003354s" podCreationTimestamp="2025-05-14 00:01:41 +0000 UTC" firstStartedPulling="2025-05-14 00:02:05.734303215 +0000 UTC m=+45.517612566" lastFinishedPulling="2025-05-14 00:02:09.836466709 +0000 UTC m=+49.619776020" observedRunningTime="2025-05-14 00:02:10.492863834 +0000 UTC m=+50.276173265" watchObservedRunningTime="2025-05-14 00:02:10.494003354 +0000 UTC m=+50.277312705"
May 14 00:02:11.191242 systemd[1]: Started sshd@13-10.0.0.141:22-10.0.0.1:53774.service - OpenSSH per-connection server daemon (10.0.0.1:53774).
May 14 00:02:11.262191 sshd[4890]: Accepted publickey for core from 10.0.0.1 port 53774 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:11.267591 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:11.277069 systemd-logind[1466]: New session 14 of user core.
May 14 00:02:11.285804 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 00:02:11.479490 sshd[4892]: Connection closed by 10.0.0.1 port 53774
May 14 00:02:11.479259 sshd-session[4890]: pam_unix(sshd:session): session closed for user core
May 14 00:02:11.483125 systemd[1]: sshd@13-10.0.0.141:22-10.0.0.1:53774.service: Deactivated successfully.
May 14 00:02:11.484897 systemd[1]: session-14.scope: Deactivated successfully.
May 14 00:02:11.485530 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit.
May 14 00:02:11.486409 systemd-logind[1466]: Removed session 14.
May 14 00:02:12.544018 kubelet[2705]: I0514 00:02:12.543964 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 00:02:16.115557 containerd[1485]: time="2025-05-14T00:02:16.115487017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e59c0fad6ea493402ed3f5fa176383d0288ce193f28c5453b74bb92f911b54af\" id:\"894928f93bc91578030d74d201a50f812ba2954f1bd9f28579340513d0c813ca\" pid:4928 exited_at:{seconds:1747180936 nanos:115102497}"
May 14 00:02:16.493569 systemd[1]: Started sshd@14-10.0.0.141:22-10.0.0.1:43324.service - OpenSSH per-connection server daemon (10.0.0.1:43324).
May 14 00:02:16.544016 sshd[4941]: Accepted publickey for core from 10.0.0.1 port 43324 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:16.545454 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:16.549437 systemd-logind[1466]: New session 15 of user core.
May 14 00:02:16.555843 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 00:02:16.691832 sshd[4943]: Connection closed by 10.0.0.1 port 43324
May 14 00:02:16.692181 sshd-session[4941]: pam_unix(sshd:session): session closed for user core
May 14 00:02:16.695485 systemd[1]: sshd@14-10.0.0.141:22-10.0.0.1:43324.service: Deactivated successfully.
May 14 00:02:16.698475 systemd[1]: session-15.scope: Deactivated successfully.
May 14 00:02:16.699328 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit.
May 14 00:02:16.700251 systemd-logind[1466]: Removed session 15.
May 14 00:02:21.705550 systemd[1]: Started sshd@15-10.0.0.141:22-10.0.0.1:43328.service - OpenSSH per-connection server daemon (10.0.0.1:43328).
May 14 00:02:21.757748 sshd[4959]: Accepted publickey for core from 10.0.0.1 port 43328 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:21.759118 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:21.763275 systemd-logind[1466]: New session 16 of user core.
May 14 00:02:21.776822 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 00:02:21.917891 sshd[4961]: Connection closed by 10.0.0.1 port 43328
May 14 00:02:21.918248 sshd-session[4959]: pam_unix(sshd:session): session closed for user core
May 14 00:02:21.936210 systemd[1]: sshd@15-10.0.0.141:22-10.0.0.1:43328.service: Deactivated successfully.
May 14 00:02:21.938051 systemd[1]: session-16.scope: Deactivated successfully.
May 14 00:02:21.938822 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit.
May 14 00:02:21.941145 systemd[1]: Started sshd@16-10.0.0.141:22-10.0.0.1:43338.service - OpenSSH per-connection server daemon (10.0.0.1:43338).
May 14 00:02:21.942188 systemd-logind[1466]: Removed session 16.
May 14 00:02:21.992072 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 43338 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:21.993311 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:21.997933 systemd-logind[1466]: New session 17 of user core.
May 14 00:02:22.007891 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 00:02:22.202090 sshd[4976]: Connection closed by 10.0.0.1 port 43338
May 14 00:02:22.202597 sshd-session[4973]: pam_unix(sshd:session): session closed for user core
May 14 00:02:22.213167 systemd[1]: sshd@16-10.0.0.141:22-10.0.0.1:43338.service: Deactivated successfully.
May 14 00:02:22.216347 systemd[1]: session-17.scope: Deactivated successfully.
May 14 00:02:22.217523 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit.
May 14 00:02:22.221111 systemd[1]: Started sshd@17-10.0.0.141:22-10.0.0.1:43342.service - OpenSSH per-connection server daemon (10.0.0.1:43342).
May 14 00:02:22.222754 systemd-logind[1466]: Removed session 17.
May 14 00:02:22.277812 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 43342 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:22.278971 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:22.283457 systemd-logind[1466]: New session 18 of user core.
May 14 00:02:22.298813 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 00:02:22.688343 containerd[1485]: time="2025-05-14T00:02:22.688243317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05f2cbc76a459a98efe627fd4f77e44c491888d2f021ca86f6279eabc33bb60c\" id:\"cb6ffb5bc2998d08546a1e70ed6f2a259ff26460d117d470ea366ce70570f533\" pid:5009 exited_at:{seconds:1747180942 nanos:687991877}"
May 14 00:02:23.859069 sshd[4989]: Connection closed by 10.0.0.1 port 43342
May 14 00:02:23.859684 sshd-session[4986]: pam_unix(sshd:session): session closed for user core
May 14 00:02:23.871897 systemd[1]: sshd@17-10.0.0.141:22-10.0.0.1:43342.service: Deactivated successfully.
May 14 00:02:23.874079 systemd[1]: session-18.scope: Deactivated successfully.
May 14 00:02:23.874825 systemd[1]: session-18.scope: Consumed 530ms CPU time, 69.4M memory peak.
May 14 00:02:23.878171 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit.
May 14 00:02:23.880220 systemd[1]: Started sshd@18-10.0.0.141:22-10.0.0.1:50218.service - OpenSSH per-connection server daemon (10.0.0.1:50218).
May 14 00:02:23.887225 systemd-logind[1466]: Removed session 18.
May 14 00:02:23.934381 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 50218 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:23.935846 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:23.940289 systemd-logind[1466]: New session 19 of user core.
May 14 00:02:23.949835 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 00:02:24.212665 sshd[5035]: Connection closed by 10.0.0.1 port 50218
May 14 00:02:24.214152 sshd-session[5032]: pam_unix(sshd:session): session closed for user core
May 14 00:02:24.234099 systemd[1]: Started sshd@19-10.0.0.141:22-10.0.0.1:50222.service - OpenSSH per-connection server daemon (10.0.0.1:50222).
May 14 00:02:24.234548 systemd[1]: sshd@18-10.0.0.141:22-10.0.0.1:50218.service: Deactivated successfully.
May 14 00:02:24.236838 systemd[1]: session-19.scope: Deactivated successfully.
May 14 00:02:24.239236 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit.
May 14 00:02:24.255381 systemd-logind[1466]: Removed session 19.
May 14 00:02:24.296476 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 50222 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:24.298015 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:24.303797 systemd-logind[1466]: New session 20 of user core.
May 14 00:02:24.311933 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 00:02:24.476282 sshd[5048]: Connection closed by 10.0.0.1 port 50222
May 14 00:02:24.476586 sshd-session[5043]: pam_unix(sshd:session): session closed for user core
May 14 00:02:24.480282 systemd[1]: sshd@19-10.0.0.141:22-10.0.0.1:50222.service: Deactivated successfully.
May 14 00:02:24.482266 systemd[1]: session-20.scope: Deactivated successfully.
May 14 00:02:24.482934 systemd-logind[1466]: Session 20 logged out. Waiting for processes to exit.
May 14 00:02:24.484203 systemd-logind[1466]: Removed session 20.
May 14 00:02:29.489104 systemd[1]: Started sshd@20-10.0.0.141:22-10.0.0.1:50224.service - OpenSSH per-connection server daemon (10.0.0.1:50224).
May 14 00:02:29.557915 sshd[5067]: Accepted publickey for core from 10.0.0.1 port 50224 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:29.559439 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:29.564253 systemd-logind[1466]: New session 21 of user core.
May 14 00:02:29.568869 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 00:02:29.755124 sshd[5069]: Connection closed by 10.0.0.1 port 50224
May 14 00:02:29.755518 sshd-session[5067]: pam_unix(sshd:session): session closed for user core
May 14 00:02:29.759946 systemd[1]: sshd@20-10.0.0.141:22-10.0.0.1:50224.service: Deactivated successfully.
May 14 00:02:29.761948 systemd[1]: session-21.scope: Deactivated successfully.
May 14 00:02:29.762729 systemd-logind[1466]: Session 21 logged out. Waiting for processes to exit.
May 14 00:02:29.763603 systemd-logind[1466]: Removed session 21.
May 14 00:02:33.646078 containerd[1485]: time="2025-05-14T00:02:33.646006216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05f2cbc76a459a98efe627fd4f77e44c491888d2f021ca86f6279eabc33bb60c\" id:\"fe8fd5cf706e2d3010286fb04bff34b319c2912afb4d15683ff3e678f6e9f460\" pid:5094 exited_at:{seconds:1747180953 nanos:645740692}"
May 14 00:02:34.771933 systemd[1]: Started sshd@21-10.0.0.141:22-10.0.0.1:54438.service - OpenSSH per-connection server daemon (10.0.0.1:54438).
May 14 00:02:34.827779 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 54438 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:34.828712 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:34.833844 systemd-logind[1466]: New session 22 of user core.
May 14 00:02:34.840943 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 00:02:34.998043 sshd[5107]: Connection closed by 10.0.0.1 port 54438
May 14 00:02:34.998418 sshd-session[5105]: pam_unix(sshd:session): session closed for user core
May 14 00:02:35.002595 systemd[1]: sshd@21-10.0.0.141:22-10.0.0.1:54438.service: Deactivated successfully.
May 14 00:02:35.005075 systemd[1]: session-22.scope: Deactivated successfully.
May 14 00:02:35.006057 systemd-logind[1466]: Session 22 logged out. Waiting for processes to exit.
May 14 00:02:35.007137 systemd-logind[1466]: Removed session 22.
May 14 00:02:40.010770 systemd[1]: Started sshd@22-10.0.0.141:22-10.0.0.1:54442.service - OpenSSH per-connection server daemon (10.0.0.1:54442).
May 14 00:02:40.064291 sshd[5131]: Accepted publickey for core from 10.0.0.1 port 54442 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 14 00:02:40.064053 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:40.068869 systemd-logind[1466]: New session 23 of user core.
May 14 00:02:40.081928 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 00:02:40.243025 sshd[5133]: Connection closed by 10.0.0.1 port 54442
May 14 00:02:40.243422 sshd-session[5131]: pam_unix(sshd:session): session closed for user core
May 14 00:02:40.247312 systemd[1]: sshd@22-10.0.0.141:22-10.0.0.1:54442.service: Deactivated successfully.
May 14 00:02:40.250247 systemd[1]: session-23.scope: Deactivated successfully.
May 14 00:02:40.251515 systemd-logind[1466]: Session 23 logged out. Waiting for processes to exit.
May 14 00:02:40.252295 systemd-logind[1466]: Removed session 23.