May 15 12:10:56.815578 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 15 12:10:56.815599 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu May 15 10:40:40 -00 2025 May 15 12:10:56.815608 kernel: KASLR enabled May 15 12:10:56.815613 kernel: efi: EFI v2.7 by EDK II May 15 12:10:56.815619 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18 May 15 12:10:56.815624 kernel: random: crng init done May 15 12:10:56.815631 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 May 15 12:10:56.815636 kernel: secureboot: Secure boot enabled May 15 12:10:56.815642 kernel: ACPI: Early table checksum verification disabled May 15 12:10:56.815649 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) May 15 12:10:56.815655 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 15 12:10:56.815660 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:10:56.815666 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:10:56.815672 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:10:56.815679 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:10:56.815686 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:10:56.815692 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:10:56.815699 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:10:56.815705 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:10:56.815711 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:10:56.815717 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 15 12:10:56.815723 kernel: ACPI: Use ACPI SPCR as default console: Yes May 15 12:10:56.815729 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 15 12:10:56.815735 kernel: NODE_DATA(0) allocated [mem 0xdc736dc0-0xdc73dfff] May 15 12:10:56.815740 kernel: Zone ranges: May 15 12:10:56.815747 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 15 12:10:56.815753 kernel: DMA32 empty May 15 12:10:56.815759 kernel: Normal empty May 15 12:10:56.815765 kernel: Device empty May 15 12:10:56.815771 kernel: Movable zone start for each node May 15 12:10:56.815777 kernel: Early memory node ranges May 15 12:10:56.815782 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] May 15 12:10:56.815788 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] May 15 12:10:56.815795 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] May 15 12:10:56.815801 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] May 15 12:10:56.815806 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] May 15 12:10:56.815812 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] May 15 12:10:56.815819 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] May 15 12:10:56.815826 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] May 15 12:10:56.815832 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 15 12:10:56.815840 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] May 15 12:10:56.815847 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 15 12:10:56.815853 kernel: psci: probing for conduit method from ACPI. May 15 12:10:56.815859 kernel: psci: PSCIv1.1 detected in firmware. May 15 12:10:56.815867 kernel: psci: Using standard PSCI v0.2 function IDs May 15 12:10:56.815873 kernel: psci: Trusted OS migration not required May 15 12:10:56.815880 kernel: psci: SMC Calling Convention v1.1 May 15 12:10:56.815886 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 15 12:10:56.815892 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168 May 15 12:10:56.815899 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096 May 15 12:10:56.815905 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 15 12:10:56.815911 kernel: Detected PIPT I-cache on CPU0 May 15 12:10:56.815918 kernel: CPU features: detected: GIC system register CPU interface May 15 12:10:56.815925 kernel: CPU features: detected: Spectre-v4 May 15 12:10:56.815932 kernel: CPU features: detected: Spectre-BHB May 15 12:10:56.815938 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 12:10:56.815944 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 12:10:56.815951 kernel: CPU features: detected: ARM erratum 1418040 May 15 12:10:56.815957 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 12:10:56.815963 kernel: alternatives: applying boot alternatives May 15 12:10:56.815970 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bf509bd8a8efc068ea7b7cbdc99b42bf1cbaf8a0ba93f67c8f1cf632dc3496d8 May 15 12:10:56.815977 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 12:10:56.815984 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 12:10:56.815990 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 12:10:56.815998 kernel: Fallback order for Node 0: 0 May 15 12:10:56.816004 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 May 15 12:10:56.816011 kernel: Policy zone: DMA May 15 12:10:56.816017 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 12:10:56.816023 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB May 15 12:10:56.816029 kernel: software IO TLB: area num 4. May 15 12:10:56.816036 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB May 15 12:10:56.816042 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) May 15 12:10:56.816049 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 12:10:56.816055 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 12:10:56.816062 kernel: rcu: RCU event tracing is enabled. May 15 12:10:56.816068 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 12:10:56.816076 kernel: Trampoline variant of Tasks RCU enabled. May 15 12:10:56.816083 kernel: Tracing variant of Tasks RCU enabled. May 15 12:10:56.816089 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 12:10:56.816096 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 12:10:56.816102 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 12:10:56.816109 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 12:10:56.816115 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 12:10:56.816121 kernel: GICv3: 256 SPIs implemented May 15 12:10:56.816128 kernel: GICv3: 0 Extended SPIs implemented May 15 12:10:56.816134 kernel: Root IRQ handler: gic_handle_irq May 15 12:10:56.816140 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 15 12:10:56.816147 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 May 15 12:10:56.816154 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 15 12:10:56.816160 kernel: ITS [mem 0x08080000-0x0809ffff] May 15 12:10:56.816167 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1) May 15 12:10:56.816173 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1) May 15 12:10:56.816179 kernel: GICv3: using LPI property table @0x0000000040100000 May 15 12:10:56.816201 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000 May 15 12:10:56.816209 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 12:10:56.816215 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 12:10:56.816222 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 15 12:10:56.816228 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 12:10:56.816235 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 12:10:56.816243 kernel: arm-pv: using stolen time PV May 15 12:10:56.816250 kernel: Console: colour dummy device 80x25 May 15 12:10:56.816256 kernel: ACPI: Core revision 20240827 May 15 12:10:56.816263 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 15 12:10:56.816270 kernel: pid_max: default: 32768 minimum: 301 May 15 12:10:56.816276 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 15 12:10:56.816283 kernel: landlock: Up and running. May 15 12:10:56.816289 kernel: SELinux: Initializing. May 15 12:10:56.816296 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 12:10:56.816303 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 12:10:56.816310 kernel: rcu: Hierarchical SRCU implementation. May 15 12:10:56.816317 kernel: rcu: Max phase no-delay instances is 400. May 15 12:10:56.816323 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 15 12:10:56.816330 kernel: Remapping and enabling EFI services. May 15 12:10:56.816336 kernel: smp: Bringing up secondary CPUs ... 
May 15 12:10:56.816343 kernel: Detected PIPT I-cache on CPU1 May 15 12:10:56.816349 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 15 12:10:56.816356 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000 May 15 12:10:56.816364 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 12:10:56.816376 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 15 12:10:56.816382 kernel: Detected PIPT I-cache on CPU2 May 15 12:10:56.816390 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 15 12:10:56.816397 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000 May 15 12:10:56.816405 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 12:10:56.816411 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 15 12:10:56.816418 kernel: Detected PIPT I-cache on CPU3 May 15 12:10:56.816425 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 15 12:10:56.816433 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000 May 15 12:10:56.816440 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 12:10:56.816447 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 15 12:10:56.816453 kernel: smp: Brought up 1 node, 4 CPUs May 15 12:10:56.816460 kernel: SMP: Total of 4 processors activated. May 15 12:10:56.816467 kernel: CPU: All CPU(s) started at EL1 May 15 12:10:56.816474 kernel: CPU features: detected: 32-bit EL0 Support May 15 12:10:56.816481 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 12:10:56.816488 kernel: CPU features: detected: Common not Private translations May 15 12:10:56.816496 kernel: CPU features: detected: CRC32 instructions May 15 12:10:56.816503 kernel: CPU features: detected: Enhanced Virtualization Traps May 15 12:10:56.816510 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 12:10:56.816517 kernel: CPU features: detected: LSE atomic instructions May 15 12:10:56.816524 kernel: CPU features: detected: Privileged Access Never May 15 12:10:56.816530 kernel: CPU features: detected: RAS Extension Support May 15 12:10:56.816537 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 12:10:56.816544 kernel: alternatives: applying system-wide alternatives May 15 12:10:56.816551 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 May 15 12:10:56.816560 kernel: Memory: 2438880K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 127640K reserved, 0K cma-reserved) May 15 12:10:56.816567 kernel: devtmpfs: initialized May 15 12:10:56.816573 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 12:10:56.816580 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 12:10:56.816587 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 15 12:10:56.816594 kernel: 0 pages in range for non-PLT usage May 15 12:10:56.816601 kernel: 508544 pages in range for PLT usage May 15 12:10:56.816608 kernel: pinctrl core: initialized pinctrl subsystem May 15 12:10:56.816615 kernel: SMBIOS 3.0.0 present. 
May 15 12:10:56.816623 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 15 12:10:56.816630 kernel: DMI: Memory slots populated: 1/1 May 15 12:10:56.816636 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 12:10:56.816643 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 15 12:10:56.816650 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 12:10:56.816657 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 12:10:56.816664 kernel: audit: initializing netlink subsys (disabled) May 15 12:10:56.816678 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1 May 15 12:10:56.816685 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 12:10:56.816693 kernel: cpuidle: using governor menu May 15 12:10:56.816700 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 15 12:10:56.816707 kernel: ASID allocator initialised with 32768 entries May 15 12:10:56.816714 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 12:10:56.816721 kernel: Serial: AMBA PL011 UART driver May 15 12:10:56.816728 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 12:10:56.816735 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 15 12:10:56.816742 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 15 12:10:56.816750 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 15 12:10:56.816756 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 12:10:56.816763 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 15 12:10:56.816770 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 15 12:10:56.816777 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 15 12:10:56.816784 kernel: ACPI: Added _OSI(Module Device) May 15 12:10:56.816790 kernel: ACPI: Added _OSI(Processor Device) May 15 12:10:56.816797 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 12:10:56.816804 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 12:10:56.816811 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 12:10:56.816819 kernel: ACPI: Interpreter enabled May 15 12:10:56.816826 kernel: ACPI: Using GIC for interrupt routing May 15 12:10:56.816833 kernel: ACPI: MCFG table detected, 1 entries May 15 12:10:56.816839 kernel: ACPI: CPU0 has been hot-added May 15 12:10:56.816846 kernel: ACPI: CPU1 has been hot-added May 15 12:10:56.816853 kernel: ACPI: CPU2 has been hot-added May 15 12:10:56.816859 kernel: ACPI: CPU3 has been hot-added May 15 12:10:56.816866 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 15 12:10:56.816873 kernel: printk: legacy console [ttyAMA0] enabled May 15 12:10:56.816881 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 12:10:56.816999 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 12:10:56.817064 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 15 12:10:56.817124 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 15 12:10:56.817181 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 15 12:10:56.817297 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 15 12:10:56.817307 kernel: ACPI: Remapped I/O 
0x000000003eff0000 to [io 0x0000-0xffff window] May 15 12:10:56.817318 kernel: PCI host bridge to bus 0000:00 May 15 12:10:56.817389 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 15 12:10:56.817449 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 15 12:10:56.818248 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 15 12:10:56.818322 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 12:10:56.818399 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint May 15 12:10:56.818477 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 15 12:10:56.818545 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] May 15 12:10:56.818607 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] May 15 12:10:56.818667 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] May 15 12:10:56.818728 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned May 15 12:10:56.818793 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned May 15 12:10:56.818855 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned May 15 12:10:56.818914 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 15 12:10:56.818970 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 15 12:10:56.819032 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 15 12:10:56.819042 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 15 12:10:56.819049 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 15 12:10:56.819057 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 15 12:10:56.819067 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 15 12:10:56.819074 kernel: iommu: Default domain type: Translated May 15 12:10:56.819083 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 12:10:56.819092 kernel: efivars: Registered efivars operations May 15 12:10:56.819100 kernel: vgaarb: loaded May 15 12:10:56.819107 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 12:10:56.819114 kernel: VFS: Disk quotas dquot_6.6.0 May 15 12:10:56.819121 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 12:10:56.819128 kernel: pnp: PnP ACPI init May 15 12:10:56.819220 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 15 12:10:56.819232 kernel: pnp: PnP ACPI: found 1 devices May 15 12:10:56.819242 kernel: NET: Registered PF_INET protocol family May 15 12:10:56.819249 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 12:10:56.819256 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 12:10:56.819263 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 12:10:56.819270 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 12:10:56.819277 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 12:10:56.819284 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 12:10:56.819291 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 12:10:56.819298 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 12:10:56.819306 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 
12:10:56.819313 kernel: PCI: CLS 0 bytes, default 64 May 15 12:10:56.819320 kernel: kvm [1]: HYP mode not available May 15 12:10:56.819327 kernel: Initialise system trusted keyrings May 15 12:10:56.819333 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 12:10:56.819340 kernel: Key type asymmetric registered May 15 12:10:56.819347 kernel: Asymmetric key parser 'x509' registered May 15 12:10:56.819354 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 15 12:10:56.819361 kernel: io scheduler mq-deadline registered May 15 12:10:56.819369 kernel: io scheduler kyber registered May 15 12:10:56.819376 kernel: io scheduler bfq registered May 15 12:10:56.819383 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 12:10:56.819390 kernel: ACPI: button: Power Button [PWRB] May 15 12:10:56.819397 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 15 12:10:56.819463 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 15 12:10:56.819472 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 12:10:56.819479 kernel: thunder_xcv, ver 1.0 May 15 12:10:56.819486 kernel: thunder_bgx, ver 1.0 May 15 12:10:56.819494 kernel: nicpf, ver 1.0 May 15 12:10:56.819501 kernel: nicvf, ver 1.0 May 15 12:10:56.819568 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 12:10:56.819627 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T12:10:56 UTC (1747311056) May 15 12:10:56.819637 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 12:10:56.819644 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available May 15 12:10:56.819651 kernel: watchdog: NMI not fully supported May 15 12:10:56.819658 kernel: watchdog: Hard watchdog permanently disabled May 15 12:10:56.819666 kernel: NET: Registered PF_INET6 protocol family May 15 12:10:56.819673 kernel: Segment Routing with IPv6 May 15 12:10:56.819680 kernel: In-situ OAM (IOAM) with IPv6 May 15 12:10:56.819687 kernel: NET: Registered PF_PACKET protocol family May 15 12:10:56.819694 kernel: Key type dns_resolver registered May 15 12:10:56.819701 kernel: registered taskstats version 1 May 15 12:10:56.819708 kernel: Loading compiled-in X.509 certificates May 15 12:10:56.819716 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 6c8c7c40bf8565fead88558d446d0157ca21f08d' May 15 12:10:56.819722 kernel: Demotion targets for Node 0: null May 15 12:10:56.819731 kernel: Key type .fscrypt registered May 15 12:10:56.819738 kernel: Key type fscrypt-provisioning registered May 15 12:10:56.819744 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 12:10:56.819751 kernel: ima: Allocated hash algorithm: sha1 May 15 12:10:56.819758 kernel: ima: No architecture policies found May 15 12:10:56.819765 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 12:10:56.819772 kernel: clk: Disabling unused clocks May 15 12:10:56.819779 kernel: PM: genpd: Disabling unused power domains May 15 12:10:56.819786 kernel: Warning: unable to open an initial console. 
May 15 12:10:56.819794 kernel: Freeing unused kernel memory: 39424K May 15 12:10:56.819801 kernel: Run /init as init process May 15 12:10:56.819807 kernel: with arguments: May 15 12:10:56.819814 kernel: /init May 15 12:10:56.819821 kernel: with environment: May 15 12:10:56.819828 kernel: HOME=/ May 15 12:10:56.819835 kernel: TERM=linux May 15 12:10:56.819841 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 12:10:56.819850 systemd[1]: Successfully made /usr/ read-only. May 15 12:10:56.819860 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 12:10:56.819869 systemd[1]: Detected virtualization kvm. May 15 12:10:56.819876 systemd[1]: Detected architecture arm64. May 15 12:10:56.819884 systemd[1]: Running in initrd. May 15 12:10:56.819891 systemd[1]: No hostname configured, using default hostname. May 15 12:10:56.819912 systemd[1]: Hostname set to . May 15 12:10:56.819920 systemd[1]: Initializing machine ID from VM UUID. May 15 12:10:56.819929 systemd[1]: Queued start job for default target initrd.target. May 15 12:10:56.819937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:10:56.819944 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:10:56.819952 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 12:10:56.819960 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 12:10:56.819967 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 12:10:56.819975 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 12:10:56.819990 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 12:10:56.819998 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 12:10:56.820017 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:10:56.820026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 12:10:56.820034 systemd[1]: Reached target paths.target - Path Units. May 15 12:10:56.820041 systemd[1]: Reached target slices.target - Slice Units. May 15 12:10:56.820051 systemd[1]: Reached target swap.target - Swaps. May 15 12:10:56.820058 systemd[1]: Reached target timers.target - Timer Units. May 15 12:10:56.820067 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 12:10:56.820074 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 12:10:56.820082 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 12:10:56.820089 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 12:10:56.820097 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 12:10:56.820104 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 12:10:56.820111 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 15 12:10:56.820119 systemd[1]: Reached target sockets.target - Socket Units. May 15 12:10:56.820127 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 12:10:56.820135 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 12:10:56.820142 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 12:10:56.820150 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 15 12:10:56.820157 systemd[1]: Starting systemd-fsck-usr.service... May 15 12:10:56.820164 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 12:10:56.820171 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 12:10:56.820179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:10:56.820203 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:10:56.820214 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 12:10:56.820222 systemd[1]: Finished systemd-fsck-usr.service. May 15 12:10:56.820229 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 12:10:56.820252 systemd-journald[244]: Collecting audit messages is disabled. May 15 12:10:56.820272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:10:56.820280 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 12:10:56.820288 systemd-journald[244]: Journal started May 15 12:10:56.820307 systemd-journald[244]: Runtime Journal (/run/log/journal/eca8410a27834011a22f1310c38b3083) is 6M, max 48.5M, 42.4M free. May 15 12:10:56.806322 systemd-modules-load[246]: Inserted module 'overlay' May 15 12:10:56.822443 systemd[1]: Started systemd-journald.service - Journal Service. May 15 12:10:56.826287 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 12:10:56.829228 kernel: Bridge firewalling registered May 15 12:10:56.828128 systemd-modules-load[246]: Inserted module 'br_netfilter' May 15 12:10:56.828713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 12:10:56.831231 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 12:10:56.834471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 12:10:56.843757 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 12:10:56.845287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 12:10:56.848473 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 12:10:56.850348 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 15 12:10:56.852289 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 12:10:56.853406 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:10:56.856502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 12:10:56.858292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 15 12:10:56.861857 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 12:10:56.866987 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bf509bd8a8efc068ea7b7cbdc99b42bf1cbaf8a0ba93f67c8f1cf632dc3496d8 May 15 12:10:56.900252 systemd-resolved[295]: Positive Trust Anchors: May 15 12:10:56.900269 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 12:10:56.900300 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 12:10:56.904998 systemd-resolved[295]: Defaulting to hostname 'linux'. May 15 12:10:56.906065 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 12:10:56.909075 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 12:10:56.948223 kernel: SCSI subsystem initialized May 15 12:10:56.952206 kernel: Loading iSCSI transport class v2.0-870. May 15 12:10:56.960227 kernel: iscsi: registered transport (tcp) May 15 12:10:56.972247 kernel: iscsi: registered transport (qla4xxx) May 15 12:10:56.972275 kernel: QLogic iSCSI HBA Driver May 15 12:10:56.989060 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 12:10:57.004247 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:10:57.005758 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 12:10:57.049647 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 12:10:57.051849 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 12:10:57.111225 kernel: raid6: neonx8 gen() 15767 MB/s May 15 12:10:57.128211 kernel: raid6: neonx4 gen() 15707 MB/s May 15 12:10:57.145198 kernel: raid6: neonx2 gen() 13196 MB/s May 15 12:10:57.162208 kernel: raid6: neonx1 gen() 10403 MB/s May 15 12:10:57.179204 kernel: raid6: int64x8 gen() 6851 MB/s May 15 12:10:57.196209 kernel: raid6: int64x4 gen() 7315 MB/s May 15 12:10:57.214200 kernel: raid6: int64x2 gen() 6701 MB/s May 15 12:10:57.231210 kernel: raid6: int64x1 gen() 5025 MB/s May 15 12:10:57.231243 kernel: raid6: using algorithm neonx8 gen() 15767 MB/s May 15 12:10:57.248220 kernel: raid6: .... 
xor() 11967 MB/s, rmw enabled May 15 12:10:57.248239 kernel: raid6: using neon recovery algorithm May 15 12:10:57.254419 kernel: xor: measuring software checksum speed May 15 12:10:57.255467 kernel: 8regs : 1667 MB/sec May 15 12:10:57.255482 kernel: 32regs : 21687 MB/sec May 15 12:10:57.256392 kernel: arm64_neon : 28225 MB/sec May 15 12:10:57.256416 kernel: xor: using function: arm64_neon (28225 MB/sec) May 15 12:10:57.314228 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 12:10:57.321289 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 12:10:57.324912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:10:57.352112 systemd-udevd[499]: Using default interface naming scheme 'v255'. May 15 12:10:57.357212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:10:57.359231 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 12:10:57.391909 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation May 15 12:10:57.418948 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 12:10:57.422452 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 12:10:57.471228 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:10:57.473462 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 12:10:57.522603 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 15 12:10:57.530497 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 12:10:57.530587 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 12:10:57.530599 kernel: GPT:9289727 != 19775487 May 15 12:10:57.530609 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 12:10:57.530624 kernel: GPT:9289727 != 19775487 May 15 12:10:57.530633 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 12:10:57.530641 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:10:57.530588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 12:10:57.530700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:10:57.532770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:10:57.535097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:10:57.567074 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 12:10:57.573971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:10:57.575358 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 12:10:57.584369 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 12:10:57.592621 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 12:10:57.598974 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 12:10:57.600162 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 12:10:57.602268 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 12:10:57.605221 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 15 12:10:57.607114 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 12:10:57.609978 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 12:10:57.611822 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 12:10:57.630703 disk-uuid[590]: Primary Header is updated. May 15 12:10:57.630703 disk-uuid[590]: Secondary Entries is updated. May 15 12:10:57.630703 disk-uuid[590]: Secondary Header is updated. May 15 12:10:57.634209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:10:57.634684 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 12:10:58.649216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:10:58.649525 disk-uuid[595]: The operation has completed successfully. May 15 12:10:58.676880 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 12:10:58.676983 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 12:10:58.700971 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 12:10:58.716963 sh[610]: Success May 15 12:10:58.732870 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 12:10:58.732908 kernel: device-mapper: uevent: version 1.0.3 May 15 12:10:58.732919 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 15 12:10:58.739264 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 15 12:10:58.764602 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 12:10:58.766938 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 12:10:58.782342 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 12:10:58.791180 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 15 12:10:58.791244 kernel: BTRFS: device fsid 0a747134-9b18-4ef1-ad11-5025524c86c8 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (622) May 15 12:10:58.793620 kernel: BTRFS info (device dm-0): first mount of filesystem 0a747134-9b18-4ef1-ad11-5025524c86c8 May 15 12:10:58.793649 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 12:10:58.793659 kernel: BTRFS info (device dm-0): using free-space-tree May 15 12:10:58.796730 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 12:10:58.797734 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 15 12:10:58.799228 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 12:10:58.799872 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 12:10:58.801574 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 15 12:10:58.825777 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (655) May 15 12:10:58.825808 kernel: BTRFS info (device vda6): first mount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 12:10:58.825818 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 12:10:58.827226 kernel: BTRFS info (device vda6): using free-space-tree May 15 12:10:58.832232 kernel: BTRFS info (device vda6): last unmount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 12:10:58.833005 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 12:10:58.834946 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 12:10:58.900118 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 12:10:58.903383 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 12:10:58.946065 systemd-networkd[799]: lo: Link UP May 15 12:10:58.946075 systemd-networkd[799]: lo: Gained carrier May 15 12:10:58.946769 systemd-networkd[799]: Enumeration completed May 15 12:10:58.947062 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 12:10:58.947541 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:10:58.947544 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 12:10:58.947896 systemd-networkd[799]: eth0: Link UP May 15 12:10:58.947899 systemd-networkd[799]: eth0: Gained carrier May 15 12:10:58.947907 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:10:58.948491 systemd[1]: Reached target network.target - Network. May 15 12:10:58.972264 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 12:10:58.978175 ignition[696]: Ignition 2.21.0 May 15 12:10:58.978204 ignition[696]: Stage: fetch-offline May 15 12:10:58.978236 ignition[696]: no configs at "/usr/lib/ignition/base.d" May 15 12:10:58.978244 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:10:58.978424 ignition[696]: parsed url from cmdline: "" May 15 12:10:58.978427 ignition[696]: no config URL provided May 15 12:10:58.978431 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" May 15 12:10:58.978440 ignition[696]: no config at "/usr/lib/ignition/user.ign" May 15 12:10:58.978457 ignition[696]: op(1): [started] loading QEMU firmware config module May 15 12:10:58.978460 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 12:10:58.989841 ignition[696]: op(1): [finished] loading QEMU firmware config module May 15 12:10:59.027810 ignition[696]: parsing config with SHA512: 5d07ba889b71ea59141c6f9e51081c4b79d600129f0175c6ef00c63282a561eeab55f423f968fea91e67636dd2fa6af20ac242dd9d2c436143c44899f7ce481d May 15 12:10:59.032451 unknown[696]: fetched base config from "system" May 15 12:10:59.033246 unknown[696]: fetched user config from "qemu" May 15 12:10:59.033625 ignition[696]: fetch-offline: fetch-offline passed May 15 12:10:59.033684 ignition[696]: Ignition finished successfully May 15 12:10:59.036247 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 15 12:10:59.037515 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 12:10:59.038246 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 12:10:59.066156 ignition[813]: Ignition 2.21.0 May 15 12:10:59.066172 ignition[813]: Stage: kargs May 15 12:10:59.066658 ignition[813]: no configs at "/usr/lib/ignition/base.d" May 15 12:10:59.066668 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:10:59.067805 ignition[813]: kargs: kargs passed May 15 12:10:59.070452 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 12:10:59.067858 ignition[813]: Ignition finished successfully May 15 12:10:59.072305 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 12:10:59.104501 ignition[821]: Ignition 2.21.0 May 15 12:10:59.104515 ignition[821]: Stage: disks May 15 12:10:59.104746 ignition[821]: no configs at "/usr/lib/ignition/base.d" May 15 12:10:59.104765 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:10:59.105831 ignition[821]: disks: disks passed May 15 12:10:59.107498 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 12:10:59.105875 ignition[821]: Ignition finished successfully May 15 12:10:59.108923 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 12:10:59.110247 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 12:10:59.112145 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 12:10:59.113662 systemd[1]: Reached target sysinit.target - System Initialization. May 15 12:10:59.115442 systemd[1]: Reached target basic.target - Basic System. May 15 12:10:59.117986 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 12:10:59.144935 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 15 12:10:59.149734 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 12:10:59.151868 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 12:10:59.222213 kernel: EXT4-fs (vda9): mounted filesystem 7753583f-75f7-43aa-89cb-b5e5a7f28ed5 r/w with ordered data mode. Quota mode: none. May 15 12:10:59.222436 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 12:10:59.223396 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 12:10:59.225657 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 12:10:59.227249 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 12:10:59.228245 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 12:10:59.228286 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 12:10:59.228308 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 12:10:59.240327 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 12:10:59.242762 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 15 12:10:59.248024 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (840) May 15 12:10:59.248051 kernel: BTRFS info (device vda6): first mount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 12:10:59.248853 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 12:10:59.248871 kernel: BTRFS info (device vda6): using free-space-tree May 15 12:10:59.254311 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 12:10:59.285701 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory May 15 12:10:59.288714 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory May 15 12:10:59.292385 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory May 15 12:10:59.295174 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory May 15 12:10:59.364257 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 12:10:59.367109 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 12:10:59.368652 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 12:10:59.385237 kernel: BTRFS info (device vda6): last unmount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 12:10:59.399299 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 12:10:59.410646 ignition[954]: INFO : Ignition 2.21.0 May 15 12:10:59.410646 ignition[954]: INFO : Stage: mount May 15 12:10:59.412152 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:10:59.412152 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:10:59.415265 ignition[954]: INFO : mount: mount passed May 15 12:10:59.415265 ignition[954]: INFO : Ignition finished successfully May 15 12:10:59.414812 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 12:10:59.418282 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 12:10:59.920585 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 12:10:59.922045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 12:10:59.947927 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (966) May 15 12:10:59.947959 kernel: BTRFS info (device vda6): first mount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 12:10:59.947970 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 12:10:59.948645 kernel: BTRFS info (device vda6): using free-space-tree May 15 12:10:59.951611 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 12:10:59.979425 ignition[983]: INFO : Ignition 2.21.0 May 15 12:10:59.979425 ignition[983]: INFO : Stage: files May 15 12:10:59.981835 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:10:59.981835 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:10:59.981835 ignition[983]: DEBUG : files: compiled without relabeling support, skipping May 15 12:10:59.981835 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 12:10:59.981835 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 12:10:59.988361 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 12:10:59.988361 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 12:10:59.988361 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 12:10:59.988361 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 12:10:59.988361 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 12:10:59.983717 unknown[983]: wrote ssh authorized keys file for user: core May 15 12:11:00.145001 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 12:11:00.260496 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 12:11:00.260496 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 12:11:00.264532 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 12:11:00.264532 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 12:11:00.264532 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 12:11:00.264532 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:11:00.264532 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:11:00.264532 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:11:00.264532 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:11:00.264532 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:11:00.264532 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:11:00.264532 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 12:11:00.281581 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 12:11:00.281581 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 12:11:00.281581 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 15 12:11:00.384313 systemd-networkd[799]: eth0: Gained IPv6LL May 15 12:11:00.582911 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 12:11:00.770635 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 12:11:00.770635 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 12:11:00.774554 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:11:00.778899 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:11:00.778899 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 12:11:00.778899 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 12:11:00.778899 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 12:11:00.778899 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 12:11:00.778899 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 15 12:11:00.778899 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 15 12:11:00.800008 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 12:11:00.803012 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 12:11:00.805823 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 15 12:11:00.805823 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 15 12:11:00.805823 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 15 12:11:00.805823 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 12:11:00.805823 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 12:11:00.805823 ignition[983]: INFO : files: files passed May 15 12:11:00.805823 ignition[983]: INFO : Ignition finished successfully May 15 12:11:00.807401 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 12:11:00.809906 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 12:11:00.812083 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
May 15 12:11:00.829000 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 12:11:00.829103 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 12:11:00.832063 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory May 15 12:11:00.835492 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:11:00.835492 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 12:11:00.838699 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:11:00.837705 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 12:11:00.840067 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 12:11:00.842819 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 12:11:00.880276 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 12:11:00.880388 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 12:11:00.882588 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 12:11:00.884429 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 12:11:00.886247 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 12:11:00.887028 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 12:11:00.910287 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 12:11:00.914339 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 12:11:00.933877 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 12:11:00.935209 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:11:00.937266 systemd[1]: Stopped target timers.target - Timer Units. May 15 12:11:00.939118 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 12:11:00.939261 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 12:11:00.941819 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 12:11:00.943789 systemd[1]: Stopped target basic.target - Basic System. May 15 12:11:00.945456 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 12:11:00.947126 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 12:11:00.949062 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 12:11:00.951054 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 15 12:11:00.953059 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 12:11:00.954973 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 12:11:00.956970 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 12:11:00.958983 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 12:11:00.960749 systemd[1]: Stopped target swap.target - Swaps. May 15 12:11:00.962333 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 12:11:00.962456 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
May 15 12:11:00.964945 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 12:11:00.966953 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:11:00.968939 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 12:11:00.969744 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:11:00.971247 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 12:11:00.971404 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 12:11:00.974461 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 12:11:00.975009 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 12:11:00.978362 systemd[1]: Stopped target paths.target - Path Units. May 15 12:11:00.980038 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 12:11:00.986273 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:11:00.987518 systemd[1]: Stopped target slices.target - Slice Units. May 15 12:11:00.989568 systemd[1]: Stopped target sockets.target - Socket Units. May 15 12:11:00.992342 systemd[1]: iscsid.socket: Deactivated successfully. May 15 12:11:00.992428 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 12:11:00.994952 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 12:11:00.995062 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 12:11:00.998046 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 12:11:00.998521 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 12:11:01.000498 systemd[1]: ignition-files.service: Deactivated successfully. May 15 12:11:01.000609 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 12:11:01.004870 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 12:11:01.006019 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 12:11:01.006143 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:11:01.016725 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 12:11:01.017611 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 12:11:01.017754 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:11:01.019845 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 12:11:01.019945 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 12:11:01.026662 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 12:11:01.026750 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 12:11:01.032347 ignition[1039]: INFO : Ignition 2.21.0 May 15 12:11:01.032347 ignition[1039]: INFO : Stage: umount May 15 12:11:01.032347 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:11:01.032347 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:11:01.032347 ignition[1039]: INFO : umount: umount passed May 15 12:11:01.032347 ignition[1039]: INFO : Ignition finished successfully May 15 12:11:01.032525 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 12:11:01.032649 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
May 15 12:11:01.035643 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 12:11:01.036090 systemd[1]: Stopped target network.target - Network. May 15 12:11:01.043137 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 12:11:01.043235 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 12:11:01.045088 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 12:11:01.045131 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 12:11:01.049344 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 12:11:01.049400 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 12:11:01.050917 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 12:11:01.050961 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 12:11:01.061457 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 12:11:01.062970 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 12:11:01.073464 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 12:11:01.073554 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 12:11:01.077603 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 12:11:01.077788 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 12:11:01.077873 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 12:11:01.081182 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 12:11:01.081974 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 15 12:11:01.083745 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 12:11:01.083778 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 12:11:01.086683 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 12:11:01.092395 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 12:11:01.092468 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 12:11:01.094438 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 12:11:01.094483 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 12:11:01.097274 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 12:11:01.097383 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 12:11:01.099408 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 12:11:01.099454 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:11:01.102670 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:11:01.106280 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 12:11:01.106337 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 12:11:01.106640 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 12:11:01.106738 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 12:11:01.110588 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 12:11:01.110676 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
May 15 12:11:01.119813 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 12:11:01.121333 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:11:01.125870 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 12:11:01.125961 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 12:11:01.127175 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 12:11:01.127241 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 12:11:01.128312 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 12:11:01.128340 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:11:01.129327 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 12:11:01.129372 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 12:11:01.132168 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 12:11:01.132233 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 12:11:01.133905 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 12:11:01.133952 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 12:11:01.136692 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 12:11:01.137990 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 15 12:11:01.138044 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:11:01.142312 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 12:11:01.142356 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:11:01.144857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 12:11:01.144898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:11:01.148923 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 15 12:11:01.148972 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 12:11:01.149005 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 12:11:01.157464 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 12:11:01.157557 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 12:11:01.159170 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 12:11:01.163901 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 12:11:01.181681 systemd[1]: Switching root. May 15 12:11:01.214398 systemd-journald[244]: Journal stopped May 15 12:11:02.003122 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
May 15 12:11:02.003172 kernel: SELinux: policy capability network_peer_controls=1 May 15 12:11:02.003205 kernel: SELinux: policy capability open_perms=1 May 15 12:11:02.003218 kernel: SELinux: policy capability extended_socket_class=1 May 15 12:11:02.003231 kernel: SELinux: policy capability always_check_network=0 May 15 12:11:02.003241 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 12:11:02.003251 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 12:11:02.003260 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 12:11:02.003271 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 12:11:02.003281 kernel: SELinux: policy capability userspace_initial_context=0 May 15 12:11:02.003290 kernel: audit: type=1403 audit(1747311061.367:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 12:11:02.003305 systemd[1]: Successfully loaded SELinux policy in 50.004ms. May 15 12:11:02.003327 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.067ms. May 15 12:11:02.003338 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 12:11:02.003350 systemd[1]: Detected virtualization kvm. May 15 12:11:02.003361 systemd[1]: Detected architecture arm64. May 15 12:11:02.003370 systemd[1]: Detected first boot. May 15 12:11:02.003380 systemd[1]: Initializing machine ID from VM UUID. May 15 12:11:02.003391 zram_generator::config[1084]: No configuration found. May 15 12:11:02.003401 kernel: NET: Registered PF_VSOCK protocol family May 15 12:11:02.003411 systemd[1]: Populated /etc with preset unit settings. May 15 12:11:02.003422 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 12:11:02.003432 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 12:11:02.003441 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 12:11:02.003457 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 12:11:02.003468 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 12:11:02.003478 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 12:11:02.003488 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 12:11:02.003499 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 12:11:02.003509 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 12:11:02.003520 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 12:11:02.003529 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 12:11:02.003541 systemd[1]: Created slice user.slice - User and Session Slice. May 15 12:11:02.003555 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:11:02.003565 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:11:02.003575 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 15 12:11:02.003585 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 12:11:02.003596 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 12:11:02.003606 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 12:11:02.003616 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 12:11:02.003626 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:11:02.003637 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 12:11:02.003647 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 12:11:02.003657 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 12:11:02.003667 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 12:11:02.003677 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 12:11:02.003687 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:11:02.003697 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 12:11:02.003707 systemd[1]: Reached target slices.target - Slice Units. May 15 12:11:02.003719 systemd[1]: Reached target swap.target - Swaps. May 15 12:11:02.003729 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 12:11:02.003738 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 12:11:02.003748 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 12:11:02.003759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 12:11:02.003769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 12:11:02.003779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:11:02.003789 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 12:11:02.003799 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 12:11:02.003810 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 12:11:02.003820 systemd[1]: Mounting media.mount - External Media Directory... May 15 12:11:02.003830 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 12:11:02.003840 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 12:11:02.003849 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 12:11:02.003860 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 12:11:02.003874 systemd[1]: Reached target machines.target - Containers. May 15 12:11:02.003883 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 12:11:02.003893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:11:02.003905 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 12:11:02.003916 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 12:11:02.003925 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 15 12:11:02.003936 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:11:02.003946 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:11:02.003956 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 12:11:02.003966 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:11:02.003976 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 12:11:02.003988 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 12:11:02.003998 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 12:11:02.004008 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 12:11:02.004017 kernel: fuse: init (API version 7.41) May 15 12:11:02.004027 systemd[1]: Stopped systemd-fsck-usr.service. May 15 12:11:02.004037 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:11:02.004048 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 12:11:02.004057 kernel: ACPI: bus type drm_connector registered May 15 12:11:02.004067 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 12:11:02.004078 kernel: loop: module loaded May 15 12:11:02.004088 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 12:11:02.004098 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 12:11:02.004108 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 12:11:02.004138 systemd-journald[1159]: Collecting audit messages is disabled. May 15 12:11:02.004167 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 12:11:02.004179 systemd[1]: verity-setup.service: Deactivated successfully. May 15 12:11:02.004237 systemd-journald[1159]: Journal started May 15 12:11:02.004260 systemd-journald[1159]: Runtime Journal (/run/log/journal/eca8410a27834011a22f1310c38b3083) is 6M, max 48.5M, 42.4M free. May 15 12:11:01.785829 systemd[1]: Queued start job for default target multi-user.target. May 15 12:11:01.797350 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 12:11:01.799462 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 12:11:02.005454 systemd[1]: Stopped verity-setup.service. May 15 12:11:02.010180 systemd[1]: Started systemd-journald.service - Journal Service. May 15 12:11:02.010854 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 12:11:02.012019 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 12:11:02.013333 systemd[1]: Mounted media.mount - External Media Directory. May 15 12:11:02.014447 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 12:11:02.015689 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 12:11:02.016972 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 12:11:02.018237 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
May 15 12:11:02.021221 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:11:02.022711 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 12:11:02.022873 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 12:11:02.024389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:11:02.024559 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:11:02.025950 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:11:02.026118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:11:02.027485 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:11:02.027649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:11:02.029330 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 12:11:02.029490 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 12:11:02.030836 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:11:02.030992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:11:02.032510 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 12:11:02.033984 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:11:02.035563 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 12:11:02.038440 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 12:11:02.050279 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 12:11:02.052893 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 12:11:02.055100 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 12:11:02.056336 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 12:11:02.056373 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 12:11:02.058481 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 12:11:02.065407 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 12:11:02.066684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:11:02.068014 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 12:11:02.070472 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 12:11:02.072043 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:11:02.074340 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 12:11:02.075657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:11:02.080381 systemd-journald[1159]: Time spent on flushing to /var/log/journal/eca8410a27834011a22f1310c38b3083 is 15.013ms for 879 entries. May 15 12:11:02.080381 systemd-journald[1159]: System Journal (/var/log/journal/eca8410a27834011a22f1310c38b3083) is 8M, max 195.6M, 187.6M free. 
May 15 12:11:02.123281 systemd-journald[1159]: Received client request to flush runtime journal. May 15 12:11:02.123346 kernel: loop0: detected capacity change from 0 to 138376 May 15 12:11:02.079690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 12:11:02.085345 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 12:11:02.087790 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 12:11:02.092227 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:11:02.093690 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 12:11:02.095010 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 12:11:02.101934 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 12:11:02.107722 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 12:11:02.111548 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 12:11:02.115942 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 12:11:02.128677 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 12:11:02.130402 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 12:11:02.137380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 12:11:02.144404 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 12:11:02.147350 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 12:11:02.151466 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 12:11:02.164530 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. May 15 12:11:02.164550 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. May 15 12:11:02.169050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:11:02.177266 kernel: loop1: detected capacity change from 0 to 107312 May 15 12:11:02.205222 kernel: loop2: detected capacity change from 0 to 189592 May 15 12:11:02.242435 kernel: loop3: detected capacity change from 0 to 138376 May 15 12:11:02.251573 kernel: loop4: detected capacity change from 0 to 107312 May 15 12:11:02.255246 kernel: loop5: detected capacity change from 0 to 189592 May 15 12:11:02.259505 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 12:11:02.259883 (sd-merge)[1223]: Merged extensions into '/usr'. May 15 12:11:02.265123 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... May 15 12:11:02.265140 systemd[1]: Reloading... May 15 12:11:02.326236 zram_generator::config[1249]: No configuration found. May 15 12:11:02.371887 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 12:11:02.402391 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:11:02.465003 systemd[1]: Reloading finished in 199 ms. May 15 12:11:02.483212 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 15 12:11:02.484772 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 12:11:02.499400 systemd[1]: Starting ensure-sysext.service... May 15 12:11:02.501127 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 12:11:02.510960 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... May 15 12:11:02.510975 systemd[1]: Reloading... May 15 12:11:02.519851 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 15 12:11:02.519888 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 15 12:11:02.520127 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 12:11:02.520345 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 12:11:02.520950 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 12:11:02.521156 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. May 15 12:11:02.521230 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. May 15 12:11:02.523859 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. May 15 12:11:02.523874 systemd-tmpfiles[1284]: Skipping /boot May 15 12:11:02.532721 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. May 15 12:11:02.532741 systemd-tmpfiles[1284]: Skipping /boot May 15 12:11:02.553246 zram_generator::config[1311]: No configuration found. May 15 12:11:02.621834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:11:02.684878 systemd[1]: Reloading finished in 173 ms. May 15 12:11:02.703737 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 12:11:02.710634 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:11:02.720338 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:11:02.722625 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 12:11:02.724823 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 12:11:02.727848 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 12:11:02.731355 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:11:02.741677 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 12:11:02.747909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:11:02.758060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:11:02.762778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:11:02.766660 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:11:02.768065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 15 12:11:02.768206 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:11:02.778874 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 12:11:02.782436 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 12:11:02.785131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:11:02.787019 systemd-udevd[1351]: Using default interface naming scheme 'v255'. May 15 12:11:02.787286 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:11:02.789211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:11:02.789383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:11:02.791258 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:11:02.791406 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:11:02.795309 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 12:11:02.800930 augenrules[1380]: No rules May 15 12:11:02.802492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:11:02.804040 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:11:02.808521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:11:02.815100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:11:02.817338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:11:02.817514 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:11:02.819281 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 12:11:02.820397 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 12:11:02.822057 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:11:02.824160 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:11:02.824382 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:11:02.825946 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 12:11:02.828842 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:11:02.830431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:11:02.832278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:11:02.832437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:11:02.834285 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 12:11:02.836090 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:11:02.843084 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 15 12:11:02.848500 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 12:11:02.865566 systemd[1]: Finished ensure-sysext.service. May 15 12:11:02.876689 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:11:02.878181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:11:02.881562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:11:02.895384 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:11:02.899340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:11:02.906342 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:11:02.907557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:11:02.907614 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:11:02.910486 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 12:11:02.916014 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 12:11:02.917212 augenrules[1428]: /sbin/augenrules: No change May 15 12:11:02.917246 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 12:11:02.919989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:11:02.920231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:11:02.921625 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:11:02.921792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:11:02.923010 augenrules[1453]: No rules May 15 12:11:02.924879 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:11:02.925078 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:11:02.938495 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 12:11:02.952450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:11:02.954660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:11:02.967700 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:11:02.969240 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:11:02.989205 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 12:11:02.994360 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 12:11:02.995565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:11:02.995637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:11:03.019108 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
May 15 12:11:03.044155 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 12:11:03.045596 systemd[1]: Reached target time-set.target - System Time Set. May 15 12:11:03.052847 systemd-resolved[1350]: Positive Trust Anchors: May 15 12:11:03.052867 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 12:11:03.052899 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 12:11:03.067114 systemd-resolved[1350]: Defaulting to hostname 'linux'. May 15 12:11:03.068485 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 12:11:03.069772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 12:11:03.070990 systemd[1]: Reached target sysinit.target - System Initialization. May 15 12:11:03.072266 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 12:11:03.073580 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 12:11:03.073618 systemd-networkd[1440]: lo: Link UP May 15 12:11:03.073622 systemd-networkd[1440]: lo: Gained carrier May 15 12:11:03.074512 systemd-networkd[1440]: Enumeration completed May 15 12:11:03.075009 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:11:03.075016 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 12:11:03.075518 systemd-networkd[1440]: eth0: Link UP May 15 12:11:03.075570 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 12:11:03.075636 systemd-networkd[1440]: eth0: Gained carrier May 15 12:11:03.075649 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:11:03.076598 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 12:11:03.083999 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 12:11:03.085299 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 12:11:03.085341 systemd[1]: Reached target paths.target - Path Units. May 15 12:11:03.086024 systemd[1]: Reached target timers.target - Timer Units. May 15 12:11:03.088092 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 12:11:03.090664 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 12:11:03.094012 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 12:11:03.095474 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
May 15 12:11:03.097487 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 12:11:03.108306 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 12:11:03.109836 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 12:11:03.111663 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 12:11:03.112251 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 12:11:03.112749 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. May 15 12:11:03.113322 systemd-timesyncd[1445]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 12:11:03.113376 systemd-timesyncd[1445]: Initial clock synchronization to Thu 2025-05-15 12:11:03.080572 UTC. May 15 12:11:03.113616 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 12:11:03.116974 systemd[1]: Reached target network.target - Network. May 15 12:11:03.118039 systemd[1]: Reached target sockets.target - Socket Units. May 15 12:11:03.119137 systemd[1]: Reached target basic.target - Basic System. May 15 12:11:03.120265 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 12:11:03.120356 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 12:11:03.121459 systemd[1]: Starting containerd.service - containerd container runtime... May 15 12:11:03.123605 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 12:11:03.127412 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 12:11:03.131268 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 12:11:03.133233 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 12:11:03.134291 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 12:11:03.135358 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 12:11:03.139033 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 12:11:03.142430 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 12:11:03.145271 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 12:11:03.147700 jq[1494]: false May 15 12:11:03.155308 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 15 12:11:03.156440 extend-filesystems[1495]: Found loop3 May 15 12:11:03.156440 extend-filesystems[1495]: Found loop4 May 15 12:11:03.157833 extend-filesystems[1495]: Found loop5 May 15 12:11:03.157833 extend-filesystems[1495]: Found vda May 15 12:11:03.157833 extend-filesystems[1495]: Found vda1 May 15 12:11:03.157833 extend-filesystems[1495]: Found vda2 May 15 12:11:03.157833 extend-filesystems[1495]: Found vda3 May 15 12:11:03.157833 extend-filesystems[1495]: Found usr May 15 12:11:03.157833 extend-filesystems[1495]: Found vda4 May 15 12:11:03.157833 extend-filesystems[1495]: Found vda6 May 15 12:11:03.157833 extend-filesystems[1495]: Found vda7 May 15 12:11:03.157833 extend-filesystems[1495]: Found vda9 May 15 12:11:03.157833 extend-filesystems[1495]: Checking size of /dev/vda9 May 15 12:11:03.157403 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 12:11:03.163475 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 12:11:03.168159 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 12:11:03.174432 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 12:11:03.177432 systemd[1]: Starting update-engine.service - Update Engine... May 15 12:11:03.179277 extend-filesystems[1495]: Resized partition /dev/vda9 May 15 12:11:03.180256 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 12:11:03.187251 extend-filesystems[1517]: resize2fs 1.47.2 (1-Jan-2025) May 15 12:11:03.190418 jq[1518]: true May 15 12:11:03.190870 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 12:11:03.193001 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 12:11:03.194257 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 12:11:03.197684 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 12:11:03.194687 systemd[1]: motdgen.service: Deactivated successfully. May 15 12:11:03.197704 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 12:11:03.202671 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 12:11:03.202856 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 12:11:03.219419 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 12:11:03.235649 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 12:11:03.235649 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 12:11:03.235649 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 12:11:03.252367 jq[1521]: true May 15 12:11:03.252640 extend-filesystems[1495]: Resized filesystem in /dev/vda9 May 15 12:11:03.237515 (ntainerd)[1522]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 12:11:03.254118 update_engine[1514]: I20250515 12:11:03.239377 1514 main.cc:92] Flatcar Update Engine starting May 15 12:11:03.239355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 15 12:11:03.254116 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 12:11:03.256232 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 12:11:03.285312 dbus-daemon[1492]: [system] SELinux support is enabled May 15 12:11:03.285481 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 12:11:03.289095 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (Power Button) May 15 12:11:03.289399 systemd-logind[1503]: New seat seat0. May 15 12:11:03.289965 tar[1520]: linux-arm64/helm May 15 12:11:03.290358 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 12:11:03.290402 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 12:11:03.292926 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 12:11:03.292944 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 12:11:03.295018 systemd[1]: Started systemd-logind.service - User Login Management. May 15 12:11:03.297989 update_engine[1514]: I20250515 12:11:03.297063 1514 update_check_scheduler.cc:74] Next update check in 5m41s May 15 12:11:03.299207 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 12:11:03.302373 systemd[1]: Started update-engine.service - Update Engine. May 15 12:11:03.305838 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 12:11:03.329375 bash[1552]: Updated "/home/core/.ssh/authorized_keys" May 15 12:11:03.333453 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 12:11:03.336908 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 12:11:03.355614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 12:11:03.383358 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 12:11:03.489427 containerd[1522]: time="2025-05-15T12:11:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 12:11:03.491350 containerd[1522]: time="2025-05-15T12:11:03.491166840Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 12:11:03.504290 containerd[1522]: time="2025-05-15T12:11:03.504180280Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.28µs" May 15 12:11:03.504290 containerd[1522]: time="2025-05-15T12:11:03.504283320Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 12:11:03.504367 containerd[1522]: time="2025-05-15T12:11:03.504304760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.504535640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.504568440Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.504592560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.504642320Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.504653680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.504858880Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.504871800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.504882720Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.504890760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.504957080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 12:11:03.505239 containerd[1522]: time="2025-05-15T12:11:03.505136920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:11:03.505477 containerd[1522]: time="2025-05-15T12:11:03.505162000Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:11:03.505477 containerd[1522]: time="2025-05-15T12:11:03.505171640Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 12:11:03.505713 containerd[1522]: time="2025-05-15T12:11:03.505687480Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 12:11:03.506949 containerd[1522]: time="2025-05-15T12:11:03.506817400Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 12:11:03.506949 containerd[1522]: time="2025-05-15T12:11:03.506930880Z" level=info msg="metadata content store policy set" policy=shared May 15 12:11:03.509914 containerd[1522]: time="2025-05-15T12:11:03.509858440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 12:11:03.509914 containerd[1522]: time="2025-05-15T12:11:03.509906160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 12:11:03.509914 containerd[1522]: time="2025-05-15T12:11:03.509919240Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 12:11:03.510015 containerd[1522]: time="2025-05-15T12:11:03.509931000Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 12:11:03.510015 containerd[1522]: time="2025-05-15T12:11:03.509942640Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 12:11:03.510015 containerd[1522]: time="2025-05-15T12:11:03.509953280Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 12:11:03.510015 containerd[1522]: time="2025-05-15T12:11:03.509965120Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 12:11:03.510015 containerd[1522]: time="2025-05-15T12:11:03.509976040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 12:11:03.510015 containerd[1522]: time="2025-05-15T12:11:03.509987400Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 12:11:03.510015 containerd[1522]: time="2025-05-15T12:11:03.509997160Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 12:11:03.510015 containerd[1522]: time="2025-05-15T12:11:03.510006160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 12:11:03.510015 containerd[1522]: time="2025-05-15T12:11:03.510017440Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 12:11:03.510163 containerd[1522]: time="2025-05-15T12:11:03.510131480Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 12:11:03.510163 containerd[1522]: time="2025-05-15T12:11:03.510151000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 12:11:03.510376 containerd[1522]: time="2025-05-15T12:11:03.510164160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 
12:11:03.510376 containerd[1522]: time="2025-05-15T12:11:03.510175600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 12:11:03.510376 containerd[1522]: time="2025-05-15T12:11:03.510218440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 12:11:03.510376 containerd[1522]: time="2025-05-15T12:11:03.510234480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 12:11:03.510376 containerd[1522]: time="2025-05-15T12:11:03.510246680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 12:11:03.510376 containerd[1522]: time="2025-05-15T12:11:03.510278240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 12:11:03.510376 containerd[1522]: time="2025-05-15T12:11:03.510297120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 12:11:03.510376 containerd[1522]: time="2025-05-15T12:11:03.510315040Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 12:11:03.510376 containerd[1522]: time="2025-05-15T12:11:03.510325280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 12:11:03.510746 containerd[1522]: time="2025-05-15T12:11:03.510502960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 12:11:03.510746 containerd[1522]: time="2025-05-15T12:11:03.510517560Z" level=info msg="Start snapshots syncer" May 15 12:11:03.510746 containerd[1522]: time="2025-05-15T12:11:03.510545560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 12:11:03.510815 containerd[1522]: time="2025-05-15T12:11:03.510743320Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 12:11:03.510815 containerd[1522]: time="2025-05-15T12:11:03.510787880Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 12:11:03.510920 containerd[1522]: time="2025-05-15T12:11:03.510849040Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 12:11:03.511058 containerd[1522]: time="2025-05-15T12:11:03.510944000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 12:11:03.511058 containerd[1522]: time="2025-05-15T12:11:03.510983080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 12:11:03.511058 containerd[1522]: time="2025-05-15T12:11:03.510996240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 12:11:03.511058 containerd[1522]: time="2025-05-15T12:11:03.511006840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 12:11:03.511058 containerd[1522]: time="2025-05-15T12:11:03.511018440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 12:11:03.511058 containerd[1522]: time="2025-05-15T12:11:03.511033640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 12:11:03.511058 containerd[1522]: time="2025-05-15T12:11:03.511044120Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 12:11:03.511058 containerd[1522]: time="2025-05-15T12:11:03.511071520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 12:11:03.511058 containerd[1522]: 
time="2025-05-15T12:11:03.511086320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 12:11:03.511058 containerd[1522]: time="2025-05-15T12:11:03.511095960Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511131680Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511144760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511153600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511162600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511169280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511177760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511214720Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511291640Z" level=info msg="runtime interface created" May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511296600Z" level=info msg="created NRI interface" May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511310760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511322280Z" level=info msg="Connect containerd service" May 15 12:11:03.511520 containerd[1522]: time="2025-05-15T12:11:03.511346600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 12:11:03.511988 containerd[1522]: time="2025-05-15T12:11:03.511960520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:11:03.626461 containerd[1522]: time="2025-05-15T12:11:03.626397880Z" level=info msg="Start subscribing containerd event" May 15 12:11:03.626461 containerd[1522]: time="2025-05-15T12:11:03.626470720Z" level=info msg="Start recovering state" May 15 12:11:03.626664 containerd[1522]: time="2025-05-15T12:11:03.626636120Z" level=info msg="Start event monitor" May 15 12:11:03.626664 containerd[1522]: time="2025-05-15T12:11:03.626660240Z" level=info msg="Start cni network conf syncer for default" May 15 12:11:03.626709 containerd[1522]: time="2025-05-15T12:11:03.626703600Z" level=info msg="Start streaming server" May 15 12:11:03.626743 containerd[1522]: time="2025-05-15T12:11:03.626712920Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 12:11:03.626743 containerd[1522]: 
time="2025-05-15T12:11:03.626720240Z" level=info msg="runtime interface starting up..." May 15 12:11:03.626743 containerd[1522]: time="2025-05-15T12:11:03.626725800Z" level=info msg="starting plugins..." May 15 12:11:03.626743 containerd[1522]: time="2025-05-15T12:11:03.626740440Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 12:11:03.627585 containerd[1522]: time="2025-05-15T12:11:03.627561600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 12:11:03.627776 containerd[1522]: time="2025-05-15T12:11:03.627756600Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 12:11:03.631504 systemd[1]: Started containerd.service - containerd container runtime. May 15 12:11:03.632849 containerd[1522]: time="2025-05-15T12:11:03.632695760Z" level=info msg="containerd successfully booted in 0.143692s" May 15 12:11:03.655466 tar[1520]: linux-arm64/LICENSE May 15 12:11:03.655618 tar[1520]: linux-arm64/README.md May 15 12:11:03.672601 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 12:11:04.021921 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 12:11:04.041618 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 12:11:04.044579 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 12:11:04.067630 systemd[1]: issuegen.service: Deactivated successfully. May 15 12:11:04.067867 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 12:11:04.070739 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 12:11:04.103119 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 12:11:04.106978 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 12:11:04.109425 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 12:11:04.110846 systemd[1]: Reached target getty.target - Login Prompts. May 15 12:11:04.928352 systemd-networkd[1440]: eth0: Gained IPv6LL May 15 12:11:04.931162 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 12:11:04.932923 systemd[1]: Reached target network-online.target - Network is Online. May 15 12:11:04.935499 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 12:11:04.937835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:11:04.952130 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 12:11:04.967788 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 12:11:04.968136 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 12:11:04.970022 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 12:11:04.980816 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 12:11:05.441303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:11:05.442873 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 12:11:05.445649 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:11:05.447267 systemd[1]: Startup finished in 2.116s (kernel) + 4.734s (initrd) + 4.130s (userspace) = 10.982s. 
May 15 12:11:05.885744 kubelet[1628]: E0515 12:11:05.885609 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:11:05.887960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:11:05.888092 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:11:05.888676 systemd[1]: kubelet.service: Consumed 788ms CPU time, 232.1M memory peak. May 15 12:11:09.878672 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 12:11:09.879882 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:36344.service - OpenSSH per-connection server daemon (10.0.0.1:36344). May 15 12:11:09.962615 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 36344 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:11:09.963225 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:11:09.969723 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 12:11:09.973868 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 12:11:09.979723 systemd-logind[1503]: New session 1 of user core. May 15 12:11:09.997142 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 12:11:09.999978 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 12:11:10.005594 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 12:11:10.007615 systemd-logind[1503]: New session c1 of user core. May 15 12:11:10.123852 systemd[1645]: Queued start job for default target default.target. May 15 12:11:10.146136 systemd[1645]: Created slice app.slice - User Application Slice. May 15 12:11:10.146166 systemd[1645]: Reached target paths.target - Paths. May 15 12:11:10.146229 systemd[1645]: Reached target timers.target - Timers. May 15 12:11:10.147513 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 12:11:10.156149 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 12:11:10.156229 systemd[1645]: Reached target sockets.target - Sockets. May 15 12:11:10.156268 systemd[1645]: Reached target basic.target - Basic System. May 15 12:11:10.156294 systemd[1645]: Reached target default.target - Main User Target. May 15 12:11:10.156320 systemd[1645]: Startup finished in 143ms. May 15 12:11:10.156515 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 12:11:10.157925 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 12:11:10.218296 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:36358.service - OpenSSH per-connection server daemon (10.0.0.1:36358). May 15 12:11:10.272615 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 36358 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:11:10.273707 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:11:10.277498 systemd-logind[1503]: New session 2 of user core. May 15 12:11:10.294353 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 15 12:11:10.345544 sshd[1658]: Connection closed by 10.0.0.1 port 36358 May 15 12:11:10.345725 sshd-session[1656]: pam_unix(sshd:session): session closed for user core May 15 12:11:10.363126 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:36358.service: Deactivated successfully. May 15 12:11:10.366564 systemd[1]: session-2.scope: Deactivated successfully. May 15 12:11:10.367305 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit. May 15 12:11:10.370354 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:36364.service - OpenSSH per-connection server daemon (10.0.0.1:36364). May 15 12:11:10.370962 systemd-logind[1503]: Removed session 2. May 15 12:11:10.418929 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 36364 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:11:10.419989 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:11:10.424245 systemd-logind[1503]: New session 3 of user core. May 15 12:11:10.431350 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 12:11:10.478998 sshd[1666]: Connection closed by 10.0.0.1 port 36364 May 15 12:11:10.479228 sshd-session[1664]: pam_unix(sshd:session): session closed for user core May 15 12:11:10.494128 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:36364.service: Deactivated successfully. May 15 12:11:10.495760 systemd[1]: session-3.scope: Deactivated successfully. May 15 12:11:10.496478 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit. May 15 12:11:10.499107 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:36380.service - OpenSSH per-connection server daemon (10.0.0.1:36380). May 15 12:11:10.499750 systemd-logind[1503]: Removed session 3. May 15 12:11:10.549948 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 36380 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:11:10.551422 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:11:10.555433 systemd-logind[1503]: New session 4 of user core. May 15 12:11:10.564341 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 12:11:10.616058 sshd[1674]: Connection closed by 10.0.0.1 port 36380 May 15 12:11:10.616351 sshd-session[1672]: pam_unix(sshd:session): session closed for user core May 15 12:11:10.632001 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:36380.service: Deactivated successfully. May 15 12:11:10.633751 systemd[1]: session-4.scope: Deactivated successfully. May 15 12:11:10.634413 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit. May 15 12:11:10.637344 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:36394.service - OpenSSH per-connection server daemon (10.0.0.1:36394). May 15 12:11:10.639270 systemd-logind[1503]: Removed session 4. May 15 12:11:10.685345 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 36394 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:11:10.686524 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:11:10.691023 systemd-logind[1503]: New session 5 of user core. May 15 12:11:10.703342 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 15 12:11:10.764150 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 12:11:10.764425 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:11:10.775684 sudo[1683]: pam_unix(sudo:session): session closed for user root May 15 12:11:10.777023 sshd[1682]: Connection closed by 10.0.0.1 port 36394 May 15 12:11:10.777621 sshd-session[1680]: pam_unix(sshd:session): session closed for user core May 15 12:11:10.790084 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:36394.service: Deactivated successfully. May 15 12:11:10.793400 systemd[1]: session-5.scope: Deactivated successfully. May 15 12:11:10.794873 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit. May 15 12:11:10.797101 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:36402.service - OpenSSH per-connection server daemon (10.0.0.1:36402). May 15 12:11:10.797640 systemd-logind[1503]: Removed session 5. May 15 12:11:10.850490 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 36402 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:11:10.851660 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:11:10.855812 systemd-logind[1503]: New session 6 of user core. May 15 12:11:10.865380 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 12:11:10.915438 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 12:11:10.915997 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:11:10.987151 sudo[1693]: pam_unix(sudo:session): session closed for user root May 15 12:11:10.992254 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 12:11:10.992510 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:11:11.000540 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:11:11.031664 augenrules[1715]: No rules May 15 12:11:11.032846 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:11:11.033071 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:11:11.033881 sudo[1692]: pam_unix(sudo:session): session closed for user root May 15 12:11:11.035256 sshd[1691]: Connection closed by 10.0.0.1 port 36402 May 15 12:11:11.035295 sshd-session[1689]: pam_unix(sshd:session): session closed for user core May 15 12:11:11.044182 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:36402.service: Deactivated successfully. May 15 12:11:11.045633 systemd[1]: session-6.scope: Deactivated successfully. May 15 12:11:11.046286 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit. May 15 12:11:11.048643 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:36408.service - OpenSSH per-connection server daemon (10.0.0.1:36408). May 15 12:11:11.049258 systemd-logind[1503]: Removed session 6. May 15 12:11:11.102990 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 36408 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:11:11.104280 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:11:11.107997 systemd-logind[1503]: New session 7 of user core. May 15 12:11:11.117861 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 15 12:11:11.167613 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 12:11:11.167875 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:11:11.532333 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 12:11:11.549464 (dockerd)[1748]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 12:11:11.817276 dockerd[1748]: time="2025-05-15T12:11:11.816650452Z" level=info msg="Starting up" May 15 12:11:11.818361 dockerd[1748]: time="2025-05-15T12:11:11.818326879Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 12:11:11.840173 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1970304625-merged.mount: Deactivated successfully. May 15 12:11:11.850387 systemd[1]: var-lib-docker-metacopy\x2dcheck1163818676-merged.mount: Deactivated successfully. May 15 12:11:11.860516 dockerd[1748]: time="2025-05-15T12:11:11.860473663Z" level=info msg="Loading containers: start." May 15 12:11:11.869250 kernel: Initializing XFRM netlink socket May 15 12:11:12.080220 systemd-networkd[1440]: docker0: Link UP May 15 12:11:12.083687 dockerd[1748]: time="2025-05-15T12:11:12.083579566Z" level=info msg="Loading containers: done." May 15 12:11:12.101575 dockerd[1748]: time="2025-05-15T12:11:12.101529094Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 12:11:12.101691 dockerd[1748]: time="2025-05-15T12:11:12.101610669Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 12:11:12.101715 dockerd[1748]: time="2025-05-15T12:11:12.101704875Z" level=info msg="Initializing buildkit" May 15 12:11:12.121557 dockerd[1748]: time="2025-05-15T12:11:12.121510851Z" level=info msg="Completed buildkit initialization" May 15 12:11:12.128693 dockerd[1748]: time="2025-05-15T12:11:12.128651309Z" level=info msg="Daemon has completed initialization" May 15 12:11:12.128792 dockerd[1748]: time="2025-05-15T12:11:12.128719575Z" level=info msg="API listen on /run/docker.sock" May 15 12:11:12.128912 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 12:11:12.774247 containerd[1522]: time="2025-05-15T12:11:12.774204426Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 15 12:11:12.838385 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3196162173-merged.mount: Deactivated successfully. May 15 12:11:13.388436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183153191.mount: Deactivated successfully. 
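dockerd has completed initialization and reports the Engine API listening on /run/docker.sock. Not part of the log: a minimal Go sketch, assuming the github.com/docker/docker/client SDK, of pinging that API.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to unix:///var/run/docker.sock when DOCKER_HOST is unset;
	// on Flatcar /var/run is typically a symlink to /run, so this reaches the
	// socket named in the daemon log above.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Engine API version:", ping.APIVersion)
}
```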
May 15 12:11:14.481331 containerd[1522]: time="2025-05-15T12:11:14.481257416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:14.481809 containerd[1522]: time="2025-05-15T12:11:14.481779708Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 15 12:11:14.483025 containerd[1522]: time="2025-05-15T12:11:14.482989767Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:14.485346 containerd[1522]: time="2025-05-15T12:11:14.485309681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:14.486388 containerd[1522]: time="2025-05-15T12:11:14.486356662Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.712106992s" May 15 12:11:14.486495 containerd[1522]: time="2025-05-15T12:11:14.486480290Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 15 12:11:14.487451 containerd[1522]: time="2025-05-15T12:11:14.487424747Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 15 12:11:15.604823 containerd[1522]: time="2025-05-15T12:11:15.604769978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:15.605478 containerd[1522]: time="2025-05-15T12:11:15.605437936Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 15 12:11:15.606050 containerd[1522]: time="2025-05-15T12:11:15.606017998Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:15.608828 containerd[1522]: time="2025-05-15T12:11:15.608793518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:15.610030 containerd[1522]: time="2025-05-15T12:11:15.609995691Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.122538128s" May 15 12:11:15.610072 containerd[1522]: time="2025-05-15T12:11:15.610031585Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 15 12:11:15.610518 
containerd[1522]: time="2025-05-15T12:11:15.610494612Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 15 12:11:16.138471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 12:11:16.139846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:11:16.267666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:11:16.271110 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:11:16.309544 kubelet[2026]: E0515 12:11:16.309470 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:11:16.312906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:11:16.313039 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:11:16.314334 systemd[1]: kubelet.service: Consumed 129ms CPU time, 94.6M memory peak. May 15 12:11:16.736727 containerd[1522]: time="2025-05-15T12:11:16.736684583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:16.737056 containerd[1522]: time="2025-05-15T12:11:16.736991488Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 15 12:11:16.737870 containerd[1522]: time="2025-05-15T12:11:16.737837058Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:16.740987 containerd[1522]: time="2025-05-15T12:11:16.740941610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:16.741769 containerd[1522]: time="2025-05-15T12:11:16.741747328Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.131220299s" May 15 12:11:16.741820 containerd[1522]: time="2025-05-15T12:11:16.741774908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 15 12:11:16.742485 containerd[1522]: time="2025-05-15T12:11:16.742460110Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 15 12:11:17.650322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982941821.mount: Deactivated successfully. 
May 15 12:11:17.852272 containerd[1522]: time="2025-05-15T12:11:17.852218210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:17.852670 containerd[1522]: time="2025-05-15T12:11:17.852567374Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 15 12:11:17.853415 containerd[1522]: time="2025-05-15T12:11:17.853384502Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:17.854913 containerd[1522]: time="2025-05-15T12:11:17.854875293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:17.855289 containerd[1522]: time="2025-05-15T12:11:17.855266948Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.112774181s" May 15 12:11:17.855344 containerd[1522]: time="2025-05-15T12:11:17.855294450Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 15 12:11:17.856124 containerd[1522]: time="2025-05-15T12:11:17.855996375Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 12:11:18.360497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769318956.mount: Deactivated successfully. 
May 15 12:11:19.112831 containerd[1522]: time="2025-05-15T12:11:19.112781185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:19.113368 containerd[1522]: time="2025-05-15T12:11:19.113336073Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 15 12:11:19.114131 containerd[1522]: time="2025-05-15T12:11:19.114108623Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:19.116578 containerd[1522]: time="2025-05-15T12:11:19.116526048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:19.117904 containerd[1522]: time="2025-05-15T12:11:19.117869835Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.261840363s" May 15 12:11:19.117904 containerd[1522]: time="2025-05-15T12:11:19.117902894Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 15 12:11:19.118379 containerd[1522]: time="2025-05-15T12:11:19.118310475Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 12:11:19.599958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239169061.mount: Deactivated successfully. 
May 15 12:11:19.604333 containerd[1522]: time="2025-05-15T12:11:19.604267003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:11:19.605243 containerd[1522]: time="2025-05-15T12:11:19.605220278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 15 12:11:19.605901 containerd[1522]: time="2025-05-15T12:11:19.605874583Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:11:19.608379 containerd[1522]: time="2025-05-15T12:11:19.608343615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:11:19.609598 containerd[1522]: time="2025-05-15T12:11:19.609569277Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 491.020714ms" May 15 12:11:19.609633 containerd[1522]: time="2025-05-15T12:11:19.609599658Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 12:11:19.610056 containerd[1522]: time="2025-05-15T12:11:19.610033902Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 12:11:20.068340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369084876.mount: Deactivated successfully. 
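The CRI plugin has been pulling the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause) into the k8s.io namespace, as the PullImage/Pulled entries above show. Not part of the log: a minimal Go sketch, assuming the github.com/containerd/containerd client, of an equivalent pull against the same daemon; the logged pulls go through the CRI plugin's own PullImage path, which `crictl pull` would drive directly.

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Pull into "k8s.io" so the image is visible to the CRI plugin.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```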
May 15 12:11:21.687039 containerd[1522]: time="2025-05-15T12:11:21.686994511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:21.687979 containerd[1522]: time="2025-05-15T12:11:21.687953900Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 15 12:11:21.688695 containerd[1522]: time="2025-05-15T12:11:21.688640171Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:21.691739 containerd[1522]: time="2025-05-15T12:11:21.691705785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:21.693490 containerd[1522]: time="2025-05-15T12:11:21.693365716Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.083304071s" May 15 12:11:21.693490 containerd[1522]: time="2025-05-15T12:11:21.693397897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 15 12:11:26.423882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 12:11:26.425487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:11:26.438703 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 12:11:26.438774 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 12:11:26.439003 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:11:26.440969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:11:26.464751 systemd[1]: Reload requested from client PID 2179 ('systemctl') (unit session-7.scope)... May 15 12:11:26.464770 systemd[1]: Reloading... May 15 12:11:26.530283 zram_generator::config[2225]: No configuration found. May 15 12:11:26.633482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:11:26.720891 systemd[1]: Reloading finished in 255 ms. May 15 12:11:26.777528 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 12:11:26.777647 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 12:11:26.777875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:11:26.777932 systemd[1]: kubelet.service: Consumed 78ms CPU time, 82.4M memory peak. May 15 12:11:26.783519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:11:26.901646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 12:11:26.905365 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:11:26.941049 kubelet[2267]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:11:26.941049 kubelet[2267]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 12:11:26.941049 kubelet[2267]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:11:26.941430 kubelet[2267]: I0515 12:11:26.941066 2267 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:11:27.277940 kubelet[2267]: I0515 12:11:27.277897 2267 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 12:11:27.277940 kubelet[2267]: I0515 12:11:27.277930 2267 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:11:27.278195 kubelet[2267]: I0515 12:11:27.278162 2267 server.go:929] "Client rotation is on, will bootstrap in background" May 15 12:11:27.320090 kubelet[2267]: E0515 12:11:27.320043 2267 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 15 12:11:27.322716 kubelet[2267]: I0515 12:11:27.322692 2267 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:11:27.331729 kubelet[2267]: I0515 12:11:27.331593 2267 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:11:27.334953 kubelet[2267]: I0515 12:11:27.334924 2267 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 12:11:27.335768 kubelet[2267]: I0515 12:11:27.335735 2267 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 12:11:27.335898 kubelet[2267]: I0515 12:11:27.335871 2267 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:11:27.336052 kubelet[2267]: I0515 12:11:27.335893 2267 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:11:27.336210 kubelet[2267]: I0515 12:11:27.336181 2267 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:11:27.336245 kubelet[2267]: I0515 12:11:27.336212 2267 container_manager_linux.go:300] "Creating device plugin manager" May 15 12:11:27.336393 kubelet[2267]: I0515 12:11:27.336375 2267 state_mem.go:36] "Initialized new in-memory state store" May 15 12:11:27.338166 kubelet[2267]: I0515 12:11:27.337932 2267 kubelet.go:408] "Attempting to sync node with API server" May 15 12:11:27.338166 kubelet[2267]: I0515 12:11:27.337957 2267 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:11:27.338166 kubelet[2267]: I0515 12:11:27.338054 2267 kubelet.go:314] "Adding apiserver pod source" May 15 12:11:27.338166 kubelet[2267]: I0515 12:11:27.338063 2267 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:11:27.340841 kubelet[2267]: W0515 12:11:27.340727 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused May 15 12:11:27.340841 kubelet[2267]: E0515 12:11:27.340792 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 15 12:11:27.340961 kubelet[2267]: W0515 12:11:27.340917 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused May 15 12:11:27.341029 kubelet[2267]: E0515 12:11:27.341001 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 15 12:11:27.342231 kubelet[2267]: I0515 12:11:27.342027 2267 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:11:27.343695 kubelet[2267]: I0515 12:11:27.343677 2267 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:11:27.346284 kubelet[2267]: W0515 12:11:27.346265 2267 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 12:11:27.347011 kubelet[2267]: I0515 12:11:27.346991 2267 server.go:1269] "Started kubelet" May 15 12:11:27.347872 kubelet[2267]: I0515 12:11:27.347812 2267 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:11:27.347917 kubelet[2267]: I0515 12:11:27.347872 2267 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:11:27.348158 kubelet[2267]: I0515 12:11:27.348134 2267 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:11:27.348893 kubelet[2267]: I0515 12:11:27.348865 2267 server.go:460] "Adding debug handlers to kubelet server" May 15 12:11:27.349126 kubelet[2267]: I0515 12:11:27.349113 2267 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:11:27.349608 kubelet[2267]: I0515 12:11:27.349588 2267 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:11:27.350415 kubelet[2267]: I0515 12:11:27.350396 2267 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 12:11:27.350567 kubelet[2267]: I0515 12:11:27.350543 2267 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 12:11:27.350630 kubelet[2267]: I0515 12:11:27.350610 2267 reconciler.go:26] "Reconciler: start to sync state" May 15 12:11:27.350728 kubelet[2267]: E0515 12:11:27.349803 2267 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fb22d216ae619 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 12:11:27.346968089 +0000 UTC 
m=+0.438804625,LastTimestamp:2025-05-15 12:11:27.346968089 +0000 UTC m=+0.438804625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 12:11:27.351406 kubelet[2267]: W0515 12:11:27.351033 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused May 15 12:11:27.351406 kubelet[2267]: E0515 12:11:27.351073 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 15 12:11:27.351406 kubelet[2267]: E0515 12:11:27.351388 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:11:27.351479 kubelet[2267]: E0515 12:11:27.351403 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" May 15 12:11:27.351555 kubelet[2267]: E0515 12:11:27.351528 2267 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:11:27.352890 kubelet[2267]: I0515 12:11:27.352870 2267 factory.go:221] Registration of the containerd container factory successfully May 15 12:11:27.352890 kubelet[2267]: I0515 12:11:27.352885 2267 factory.go:221] Registration of the systemd container factory successfully May 15 12:11:27.353066 kubelet[2267]: I0515 12:11:27.352941 2267 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:11:27.365096 kubelet[2267]: I0515 12:11:27.365073 2267 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 12:11:27.365096 kubelet[2267]: I0515 12:11:27.365089 2267 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 12:11:27.365204 kubelet[2267]: I0515 12:11:27.365103 2267 state_mem.go:36] "Initialized new in-memory state store" May 15 12:11:27.366517 kubelet[2267]: I0515 12:11:27.366392 2267 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:11:27.367336 kubelet[2267]: I0515 12:11:27.367319 2267 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 12:11:27.367408 kubelet[2267]: I0515 12:11:27.367400 2267 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 12:11:27.367575 kubelet[2267]: I0515 12:11:27.367563 2267 kubelet.go:2321] "Starting kubelet main sync loop" May 15 12:11:27.367994 kubelet[2267]: E0515 12:11:27.367660 2267 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:11:27.368542 kubelet[2267]: W0515 12:11:27.368489 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused May 15 12:11:27.368542 kubelet[2267]: E0515 12:11:27.368528 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 15 12:11:27.451853 kubelet[2267]: E0515 12:11:27.451806 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:11:27.467938 kubelet[2267]: E0515 12:11:27.467905 2267 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 12:11:27.504635 kubelet[2267]: I0515 12:11:27.504604 2267 policy_none.go:49] "None policy: Start" May 15 12:11:27.505285 kubelet[2267]: I0515 12:11:27.505268 2267 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 12:11:27.505350 kubelet[2267]: I0515 12:11:27.505295 2267 state_mem.go:35] "Initializing new in-memory state store" May 15 12:11:27.513707 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 12:11:27.528025 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 12:11:27.531559 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 15 12:11:27.548102 kubelet[2267]: I0515 12:11:27.548051 2267 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:11:27.548311 kubelet[2267]: I0515 12:11:27.548280 2267 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:11:27.548375 kubelet[2267]: I0515 12:11:27.548298 2267 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:11:27.548528 kubelet[2267]: I0515 12:11:27.548509 2267 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:11:27.550484 kubelet[2267]: E0515 12:11:27.550409 2267 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 12:11:27.552055 kubelet[2267]: E0515 12:11:27.552021 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" May 15 12:11:27.650221 kubelet[2267]: I0515 12:11:27.650178 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:11:27.650592 kubelet[2267]: E0515 12:11:27.650570 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" May 15 12:11:27.676010 systemd[1]: Created slice kubepods-burstable-podfa4bce6de12938152b4ec5063b72892d.slice - libcontainer container kubepods-burstable-podfa4bce6de12938152b4ec5063b72892d.slice. May 15 12:11:27.692356 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 15 12:11:27.696112 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
May 15 12:11:27.751894 kubelet[2267]: I0515 12:11:27.751847 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:11:27.751894 kubelet[2267]: I0515 12:11:27.751888 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:11:27.752030 kubelet[2267]: I0515 12:11:27.751908 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:11:27.752030 kubelet[2267]: I0515 12:11:27.751927 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 12:11:27.752030 kubelet[2267]: I0515 12:11:27.751944 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:11:27.752030 kubelet[2267]: I0515 12:11:27.751959 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:11:27.752030 kubelet[2267]: I0515 12:11:27.751973 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa4bce6de12938152b4ec5063b72892d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fa4bce6de12938152b4ec5063b72892d\") " pod="kube-system/kube-apiserver-localhost" May 15 12:11:27.752128 kubelet[2267]: I0515 12:11:27.751987 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa4bce6de12938152b4ec5063b72892d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fa4bce6de12938152b4ec5063b72892d\") " pod="kube-system/kube-apiserver-localhost" May 15 12:11:27.752128 kubelet[2267]: I0515 12:11:27.752004 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa4bce6de12938152b4ec5063b72892d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fa4bce6de12938152b4ec5063b72892d\") " 
pod="kube-system/kube-apiserver-localhost" May 15 12:11:27.852836 kubelet[2267]: I0515 12:11:27.852726 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:11:27.853749 kubelet[2267]: E0515 12:11:27.853723 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" May 15 12:11:27.953328 kubelet[2267]: E0515 12:11:27.953293 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" May 15 12:11:27.991149 containerd[1522]: time="2025-05-15T12:11:27.991105126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fa4bce6de12938152b4ec5063b72892d,Namespace:kube-system,Attempt:0,}" May 15 12:11:27.995756 containerd[1522]: time="2025-05-15T12:11:27.995692267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 15 12:11:27.999781 containerd[1522]: time="2025-05-15T12:11:27.999747311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 15 12:11:28.024335 containerd[1522]: time="2025-05-15T12:11:28.023870024Z" level=info msg="connecting to shim 2375f5fa1fe496f3a3e8a543c0eff9bb52780a705fb4d36b6973dc79276b0749" address="unix:///run/containerd/s/c39bccf07bbaf6dfab9d28c3b5da3df1a7f793a4bd03b4a470884052a10e9883" namespace=k8s.io protocol=ttrpc version=3 May 15 12:11:28.025954 containerd[1522]: time="2025-05-15T12:11:28.025879505Z" level=info msg="connecting to shim 9af1bc821966111c23b98995b9945504c3c9aff7e621e7fdf77968a7fbce3dfc" address="unix:///run/containerd/s/1ad87a7fa4ffdd442f84db6d12801d65f76d169ee363031324aecd06bfe1eb34" namespace=k8s.io protocol=ttrpc version=3 May 15 12:11:28.041036 containerd[1522]: time="2025-05-15T12:11:28.040982782Z" level=info msg="connecting to shim 4c082f94f122f9777309be590966bc3999e799e25768f29710e793d8654c8688" address="unix:///run/containerd/s/bb2f7423248eb3d1c527cdd04597d6beb9e943cbb5a6bd2dba442e9ddee9e65b" namespace=k8s.io protocol=ttrpc version=3 May 15 12:11:28.048346 systemd[1]: Started cri-containerd-2375f5fa1fe496f3a3e8a543c0eff9bb52780a705fb4d36b6973dc79276b0749.scope - libcontainer container 2375f5fa1fe496f3a3e8a543c0eff9bb52780a705fb4d36b6973dc79276b0749. May 15 12:11:28.051474 systemd[1]: Started cri-containerd-9af1bc821966111c23b98995b9945504c3c9aff7e621e7fdf77968a7fbce3dfc.scope - libcontainer container 9af1bc821966111c23b98995b9945504c3c9aff7e621e7fdf77968a7fbce3dfc. May 15 12:11:28.073328 systemd[1]: Started cri-containerd-4c082f94f122f9777309be590966bc3999e799e25768f29710e793d8654c8688.scope - libcontainer container 4c082f94f122f9777309be590966bc3999e799e25768f29710e793d8654c8688. 
May 15 12:11:28.087818 containerd[1522]: time="2025-05-15T12:11:28.087762511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fa4bce6de12938152b4ec5063b72892d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2375f5fa1fe496f3a3e8a543c0eff9bb52780a705fb4d36b6973dc79276b0749\"" May 15 12:11:28.094555 containerd[1522]: time="2025-05-15T12:11:28.094515291Z" level=info msg="CreateContainer within sandbox \"2375f5fa1fe496f3a3e8a543c0eff9bb52780a705fb4d36b6973dc79276b0749\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 12:11:28.094660 containerd[1522]: time="2025-05-15T12:11:28.094617242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9af1bc821966111c23b98995b9945504c3c9aff7e621e7fdf77968a7fbce3dfc\"" May 15 12:11:28.097681 containerd[1522]: time="2025-05-15T12:11:28.097639761Z" level=info msg="CreateContainer within sandbox \"9af1bc821966111c23b98995b9945504c3c9aff7e621e7fdf77968a7fbce3dfc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 12:11:28.106481 containerd[1522]: time="2025-05-15T12:11:28.105973466Z" level=info msg="Container eae712992bdb6c6964ed9f317ab77942fec11ea3ade9205ce994d0690ba90832: CDI devices from CRI Config.CDIDevices: []" May 15 12:11:28.107500 containerd[1522]: time="2025-05-15T12:11:28.107474150Z" level=info msg="Container 3b1d00f0a1d144149998972f9babbf0e09067d6370e3961c4ae7203f740d6071: CDI devices from CRI Config.CDIDevices: []" May 15 12:11:28.116929 containerd[1522]: time="2025-05-15T12:11:28.116871708Z" level=info msg="CreateContainer within sandbox \"2375f5fa1fe496f3a3e8a543c0eff9bb52780a705fb4d36b6973dc79276b0749\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eae712992bdb6c6964ed9f317ab77942fec11ea3ade9205ce994d0690ba90832\"" May 15 12:11:28.117554 containerd[1522]: time="2025-05-15T12:11:28.117502248Z" level=info msg="StartContainer for \"eae712992bdb6c6964ed9f317ab77942fec11ea3ade9205ce994d0690ba90832\"" May 15 12:11:28.118341 containerd[1522]: time="2025-05-15T12:11:28.118306984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c082f94f122f9777309be590966bc3999e799e25768f29710e793d8654c8688\"" May 15 12:11:28.118630 containerd[1522]: time="2025-05-15T12:11:28.118479941Z" level=info msg="CreateContainer within sandbox \"9af1bc821966111c23b98995b9945504c3c9aff7e621e7fdf77968a7fbce3dfc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3b1d00f0a1d144149998972f9babbf0e09067d6370e3961c4ae7203f740d6071\"" May 15 12:11:28.118630 containerd[1522]: time="2025-05-15T12:11:28.118572777Z" level=info msg="connecting to shim eae712992bdb6c6964ed9f317ab77942fec11ea3ade9205ce994d0690ba90832" address="unix:///run/containerd/s/c39bccf07bbaf6dfab9d28c3b5da3df1a7f793a4bd03b4a470884052a10e9883" protocol=ttrpc version=3 May 15 12:11:28.119291 containerd[1522]: time="2025-05-15T12:11:28.119216390Z" level=info msg="StartContainer for \"3b1d00f0a1d144149998972f9babbf0e09067d6370e3961c4ae7203f740d6071\"" May 15 12:11:28.120279 containerd[1522]: time="2025-05-15T12:11:28.120252976Z" level=info msg="connecting to shim 3b1d00f0a1d144149998972f9babbf0e09067d6370e3961c4ae7203f740d6071" 
address="unix:///run/containerd/s/1ad87a7fa4ffdd442f84db6d12801d65f76d169ee363031324aecd06bfe1eb34" protocol=ttrpc version=3 May 15 12:11:28.121674 containerd[1522]: time="2025-05-15T12:11:28.121648550Z" level=info msg="CreateContainer within sandbox \"4c082f94f122f9777309be590966bc3999e799e25768f29710e793d8654c8688\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 12:11:28.130205 containerd[1522]: time="2025-05-15T12:11:28.129563095Z" level=info msg="Container e53ec10cf8bdd3898261b44f7f4ff465b28ab6536750f8dd0b921f24ae269048: CDI devices from CRI Config.CDIDevices: []" May 15 12:11:28.136305 containerd[1522]: time="2025-05-15T12:11:28.136264459Z" level=info msg="CreateContainer within sandbox \"4c082f94f122f9777309be590966bc3999e799e25768f29710e793d8654c8688\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e53ec10cf8bdd3898261b44f7f4ff465b28ab6536750f8dd0b921f24ae269048\"" May 15 12:11:28.136811 containerd[1522]: time="2025-05-15T12:11:28.136786770Z" level=info msg="StartContainer for \"e53ec10cf8bdd3898261b44f7f4ff465b28ab6536750f8dd0b921f24ae269048\"" May 15 12:11:28.137346 systemd[1]: Started cri-containerd-eae712992bdb6c6964ed9f317ab77942fec11ea3ade9205ce994d0690ba90832.scope - libcontainer container eae712992bdb6c6964ed9f317ab77942fec11ea3ade9205ce994d0690ba90832. May 15 12:11:28.138494 containerd[1522]: time="2025-05-15T12:11:28.138467689Z" level=info msg="connecting to shim e53ec10cf8bdd3898261b44f7f4ff465b28ab6536750f8dd0b921f24ae269048" address="unix:///run/containerd/s/bb2f7423248eb3d1c527cdd04597d6beb9e943cbb5a6bd2dba442e9ddee9e65b" protocol=ttrpc version=3 May 15 12:11:28.140805 systemd[1]: Started cri-containerd-3b1d00f0a1d144149998972f9babbf0e09067d6370e3961c4ae7203f740d6071.scope - libcontainer container 3b1d00f0a1d144149998972f9babbf0e09067d6370e3961c4ae7203f740d6071. May 15 12:11:28.159370 kubelet[2267]: W0515 12:11:28.159140 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused May 15 12:11:28.159370 kubelet[2267]: E0515 12:11:28.159304 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 15 12:11:28.159463 systemd[1]: Started cri-containerd-e53ec10cf8bdd3898261b44f7f4ff465b28ab6536750f8dd0b921f24ae269048.scope - libcontainer container e53ec10cf8bdd3898261b44f7f4ff465b28ab6536750f8dd0b921f24ae269048. 
May 15 12:11:28.181366 containerd[1522]: time="2025-05-15T12:11:28.181324729Z" level=info msg="StartContainer for \"eae712992bdb6c6964ed9f317ab77942fec11ea3ade9205ce994d0690ba90832\" returns successfully" May 15 12:11:28.207153 containerd[1522]: time="2025-05-15T12:11:28.206994286Z" level=info msg="StartContainer for \"3b1d00f0a1d144149998972f9babbf0e09067d6370e3961c4ae7203f740d6071\" returns successfully" May 15 12:11:28.224462 containerd[1522]: time="2025-05-15T12:11:28.224399905Z" level=info msg="StartContainer for \"e53ec10cf8bdd3898261b44f7f4ff465b28ab6536750f8dd0b921f24ae269048\" returns successfully" May 15 12:11:28.255465 kubelet[2267]: I0515 12:11:28.255434 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:11:28.255976 kubelet[2267]: E0515 12:11:28.255946 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" May 15 12:11:29.057861 kubelet[2267]: I0515 12:11:29.057806 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:11:29.976629 kubelet[2267]: E0515 12:11:29.976552 2267 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 12:11:30.025806 kubelet[2267]: E0515 12:11:30.025621 2267 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183fb22d216ae619 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 12:11:27.346968089 +0000 UTC m=+0.438804625,LastTimestamp:2025-05-15 12:11:27.346968089 +0000 UTC m=+0.438804625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 12:11:30.080900 kubelet[2267]: E0515 12:11:30.080790 2267 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183fb22d21b05901 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 12:11:27.351519489 +0000 UTC m=+0.443356025,LastTimestamp:2025-05-15 12:11:27.351519489 +0000 UTC m=+0.443356025,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 12:11:30.102283 kubelet[2267]: I0515 12:11:30.102248 2267 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 12:11:30.339664 kubelet[2267]: I0515 12:11:30.339358 2267 apiserver.go:52] "Watching apiserver" May 15 12:11:30.350984 kubelet[2267]: I0515 12:11:30.350929 2267 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 12:11:31.863913 systemd[1]: Reload requested from client PID 2540 ('systemctl') (unit session-7.scope)... May 15 12:11:31.863936 systemd[1]: Reloading... 
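
The kube-apiserver, kube-controller-manager and kube-scheduler containers started above are created from static pod manifests; after the systemd reload that follows, the restarted kubelet logs "Adding static pod path" path="/etc/kubernetes/manifests" and finds the same pods already running as mirror pods. A minimal sketch of what such a manifest can look like, written as JSON purely for a standard-library example (the kubelet's file source accepts JSON as well as YAML); the pod name and image are placeholders, not taken from this log.

    # Placeholder static pod manifest of the kind the kubelet loads from
    # /etc/kubernetes/manifests (path reported by the restarted kubelet below).
    # Name and image are illustrative, not values from this log.
    import json

    manifest = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "example-static-pod", "namespace": "kube-system"},
        "spec": {
            "hostNetwork": True,
            "containers": [
                {"name": "example", "image": "registry.example/pause:latest"},
            ],
        },
    }

    with open("example-static-pod.json", "w") as f:
        json.dump(manifest, f, indent=2)
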
May 15 12:11:31.932292 zram_generator::config[2583]: No configuration found. May 15 12:11:32.003509 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:11:32.103263 systemd[1]: Reloading finished in 239 ms. May 15 12:11:32.138663 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:11:32.150284 systemd[1]: kubelet.service: Deactivated successfully. May 15 12:11:32.151255 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:11:32.151317 systemd[1]: kubelet.service: Consumed 820ms CPU time, 116.1M memory peak. May 15 12:11:32.153099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:11:32.276232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:11:32.280642 (kubelet)[2625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:11:32.321062 kubelet[2625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:11:32.321062 kubelet[2625]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 12:11:32.321062 kubelet[2625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:11:32.321451 kubelet[2625]: I0515 12:11:32.321156 2625 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:11:32.326576 kubelet[2625]: I0515 12:11:32.326547 2625 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 12:11:32.327212 kubelet[2625]: I0515 12:11:32.326681 2625 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:11:32.327212 kubelet[2625]: I0515 12:11:32.326900 2625 server.go:929] "Client rotation is on, will bootstrap in background" May 15 12:11:32.328209 kubelet[2625]: I0515 12:11:32.328184 2625 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 12:11:32.330142 kubelet[2625]: I0515 12:11:32.330114 2625 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:11:32.335570 kubelet[2625]: I0515 12:11:32.335548 2625 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:11:32.338133 kubelet[2625]: I0515 12:11:32.338111 2625 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 12:11:32.338422 kubelet[2625]: I0515 12:11:32.338409 2625 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 12:11:32.338619 kubelet[2625]: I0515 12:11:32.338590 2625 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:11:32.338856 kubelet[2625]: I0515 12:11:32.338671 2625 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:11:32.338989 kubelet[2625]: I0515 12:11:32.338976 2625 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:11:32.339035 kubelet[2625]: I0515 12:11:32.339027 2625 container_manager_linux.go:300] "Creating device plugin manager" May 15 12:11:32.339111 kubelet[2625]: I0515 12:11:32.339103 2625 state_mem.go:36] "Initialized new in-memory state store" May 15 12:11:32.339292 kubelet[2625]: I0515 12:11:32.339281 2625 kubelet.go:408] "Attempting to sync node with API server" May 15 12:11:32.339781 kubelet[2625]: I0515 12:11:32.339765 2625 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:11:32.339941 kubelet[2625]: I0515 12:11:32.339930 2625 kubelet.go:314] "Adding apiserver pod source" May 15 12:11:32.340014 kubelet[2625]: I0515 12:11:32.340006 2625 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:11:32.340790 kubelet[2625]: I0515 12:11:32.340764 2625 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:11:32.341679 kubelet[2625]: I0515 12:11:32.341584 2625 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:11:32.342046 kubelet[2625]: I0515 12:11:32.342032 2625 server.go:1269] "Started kubelet" May 15 12:11:32.342398 kubelet[2625]: I0515 12:11:32.342340 2625 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 
12:11:32.342585 kubelet[2625]: I0515 12:11:32.342571 2625 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:11:32.343051 kubelet[2625]: I0515 12:11:32.342119 2625 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:11:32.344476 kubelet[2625]: I0515 12:11:32.344452 2625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:11:32.344785 kubelet[2625]: I0515 12:11:32.344681 2625 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:11:32.344785 kubelet[2625]: I0515 12:11:32.344710 2625 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 12:11:32.345080 kubelet[2625]: I0515 12:11:32.345068 2625 server.go:460] "Adding debug handlers to kubelet server" May 15 12:11:32.345565 kubelet[2625]: I0515 12:11:32.344697 2625 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 12:11:32.345855 kubelet[2625]: I0515 12:11:32.345785 2625 reconciler.go:26] "Reconciler: start to sync state" May 15 12:11:32.345855 kubelet[2625]: E0515 12:11:32.345830 2625 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:11:32.348905 kubelet[2625]: I0515 12:11:32.347738 2625 factory.go:221] Registration of the systemd container factory successfully May 15 12:11:32.348905 kubelet[2625]: I0515 12:11:32.347840 2625 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:11:32.351220 kubelet[2625]: I0515 12:11:32.349927 2625 factory.go:221] Registration of the containerd container factory successfully May 15 12:11:32.351678 kubelet[2625]: E0515 12:11:32.351565 2625 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:11:32.374768 kubelet[2625]: I0515 12:11:32.374722 2625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:11:32.376431 kubelet[2625]: I0515 12:11:32.376406 2625 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 12:11:32.376431 kubelet[2625]: I0515 12:11:32.376436 2625 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 12:11:32.376544 kubelet[2625]: I0515 12:11:32.376453 2625 kubelet.go:2321] "Starting kubelet main sync loop" May 15 12:11:32.376544 kubelet[2625]: E0515 12:11:32.376493 2625 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:11:32.399094 kubelet[2625]: I0515 12:11:32.397849 2625 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 12:11:32.399094 kubelet[2625]: I0515 12:11:32.397865 2625 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 12:11:32.399094 kubelet[2625]: I0515 12:11:32.397885 2625 state_mem.go:36] "Initialized new in-memory state store" May 15 12:11:32.399094 kubelet[2625]: I0515 12:11:32.398045 2625 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 12:11:32.399094 kubelet[2625]: I0515 12:11:32.398055 2625 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 12:11:32.399094 kubelet[2625]: I0515 12:11:32.398086 2625 policy_none.go:49] "None policy: Start" May 15 12:11:32.399318 kubelet[2625]: I0515 12:11:32.399237 2625 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 12:11:32.399318 kubelet[2625]: I0515 12:11:32.399256 2625 state_mem.go:35] "Initializing new in-memory state store" May 15 12:11:32.399421 kubelet[2625]: I0515 12:11:32.399400 2625 state_mem.go:75] "Updated machine memory state" May 15 12:11:32.404521 kubelet[2625]: I0515 12:11:32.404493 2625 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:11:32.404810 kubelet[2625]: I0515 12:11:32.404793 2625 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:11:32.404856 kubelet[2625]: I0515 12:11:32.404812 2625 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:11:32.406240 kubelet[2625]: I0515 12:11:32.405611 2625 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:11:32.484043 kubelet[2625]: E0515 12:11:32.483986 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 12:11:32.506908 kubelet[2625]: I0515 12:11:32.506887 2625 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:11:32.513163 kubelet[2625]: I0515 12:11:32.513134 2625 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 15 12:11:32.513458 kubelet[2625]: I0515 12:11:32.513370 2625 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 12:11:32.547334 kubelet[2625]: I0515 12:11:32.547300 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:11:32.547334 kubelet[2625]: I0515 12:11:32.547341 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa4bce6de12938152b4ec5063b72892d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fa4bce6de12938152b4ec5063b72892d\") 
" pod="kube-system/kube-apiserver-localhost" May 15 12:11:32.547611 kubelet[2625]: I0515 12:11:32.547379 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa4bce6de12938152b4ec5063b72892d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fa4bce6de12938152b4ec5063b72892d\") " pod="kube-system/kube-apiserver-localhost" May 15 12:11:32.547611 kubelet[2625]: I0515 12:11:32.547397 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:11:32.547611 kubelet[2625]: I0515 12:11:32.547413 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:11:32.547611 kubelet[2625]: I0515 12:11:32.547447 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:11:32.547611 kubelet[2625]: I0515 12:11:32.547487 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:11:32.547755 kubelet[2625]: I0515 12:11:32.547523 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 12:11:32.547755 kubelet[2625]: I0515 12:11:32.547552 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa4bce6de12938152b4ec5063b72892d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fa4bce6de12938152b4ec5063b72892d\") " pod="kube-system/kube-apiserver-localhost" May 15 12:11:33.340642 kubelet[2625]: I0515 12:11:33.340551 2625 apiserver.go:52] "Watching apiserver" May 15 12:11:33.344975 kubelet[2625]: I0515 12:11:33.344935 2625 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 12:11:33.396114 kubelet[2625]: E0515 12:11:33.396063 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 12:11:33.409828 kubelet[2625]: I0515 12:11:33.409760 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.409730819 
podStartE2EDuration="1.409730819s" podCreationTimestamp="2025-05-15 12:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:11:33.409476922 +0000 UTC m=+1.126022802" watchObservedRunningTime="2025-05-15 12:11:33.409730819 +0000 UTC m=+1.126276699" May 15 12:11:33.416704 kubelet[2625]: I0515 12:11:33.416661 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.416614938 podStartE2EDuration="2.416614938s" podCreationTimestamp="2025-05-15 12:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:11:33.416467838 +0000 UTC m=+1.133013678" watchObservedRunningTime="2025-05-15 12:11:33.416614938 +0000 UTC m=+1.133160818" May 15 12:11:36.958665 sudo[1727]: pam_unix(sudo:session): session closed for user root May 15 12:11:36.959854 sshd[1726]: Connection closed by 10.0.0.1 port 36408 May 15 12:11:36.960281 sshd-session[1724]: pam_unix(sshd:session): session closed for user core May 15 12:11:36.964905 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit. May 15 12:11:36.965089 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:36408.service: Deactivated successfully. May 15 12:11:36.967160 systemd[1]: session-7.scope: Deactivated successfully. May 15 12:11:36.967371 systemd[1]: session-7.scope: Consumed 6.676s CPU time, 228.5M memory peak. May 15 12:11:36.969253 systemd-logind[1503]: Removed session 7. May 15 12:11:39.372650 kubelet[2625]: I0515 12:11:39.372356 2625 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 12:11:39.373001 kubelet[2625]: I0515 12:11:39.372828 2625 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 12:11:39.373027 containerd[1522]: time="2025-05-15T12:11:39.372647151Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 12:11:40.078467 kubelet[2625]: I0515 12:11:40.078403 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.078386944 podStartE2EDuration="8.078386944s" podCreationTimestamp="2025-05-15 12:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:11:33.425404041 +0000 UTC m=+1.141949961" watchObservedRunningTime="2025-05-15 12:11:40.078386944 +0000 UTC m=+7.794932824" May 15 12:11:40.087412 systemd[1]: Created slice kubepods-besteffort-pode7ca271f_acaf_4df7_9a23_88c1ad47f0d0.slice - libcontainer container kubepods-besteffort-pode7ca271f_acaf_4df7_9a23_88c1ad47f0d0.slice. 
May 15 12:11:40.100183 kubelet[2625]: I0515 12:11:40.100049 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7ca271f-acaf-4df7-9a23-88c1ad47f0d0-kube-proxy\") pod \"kube-proxy-v4rn2\" (UID: \"e7ca271f-acaf-4df7-9a23-88c1ad47f0d0\") " pod="kube-system/kube-proxy-v4rn2" May 15 12:11:40.100183 kubelet[2625]: I0515 12:11:40.100099 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7ca271f-acaf-4df7-9a23-88c1ad47f0d0-xtables-lock\") pod \"kube-proxy-v4rn2\" (UID: \"e7ca271f-acaf-4df7-9a23-88c1ad47f0d0\") " pod="kube-system/kube-proxy-v4rn2" May 15 12:11:40.100183 kubelet[2625]: I0515 12:11:40.100135 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7ca271f-acaf-4df7-9a23-88c1ad47f0d0-lib-modules\") pod \"kube-proxy-v4rn2\" (UID: \"e7ca271f-acaf-4df7-9a23-88c1ad47f0d0\") " pod="kube-system/kube-proxy-v4rn2" May 15 12:11:40.100183 kubelet[2625]: I0515 12:11:40.100154 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf8r8\" (UniqueName: \"kubernetes.io/projected/e7ca271f-acaf-4df7-9a23-88c1ad47f0d0-kube-api-access-sf8r8\") pod \"kube-proxy-v4rn2\" (UID: \"e7ca271f-acaf-4df7-9a23-88c1ad47f0d0\") " pod="kube-system/kube-proxy-v4rn2" May 15 12:11:40.400311 containerd[1522]: time="2025-05-15T12:11:40.400273398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v4rn2,Uid:e7ca271f-acaf-4df7-9a23-88c1ad47f0d0,Namespace:kube-system,Attempt:0,}" May 15 12:11:40.415230 containerd[1522]: time="2025-05-15T12:11:40.414880959Z" level=info msg="connecting to shim 8a7adadd6f5410efa25702ad6974ab2f8aa9e92583f8c0f7a7944cf0dddad17c" address="unix:///run/containerd/s/e659e822454eab508d2df604186430136f2827f1f3005abda0dd926cbd55d617" namespace=k8s.io protocol=ttrpc version=3 May 15 12:11:40.453395 systemd[1]: Started cri-containerd-8a7adadd6f5410efa25702ad6974ab2f8aa9e92583f8c0f7a7944cf0dddad17c.scope - libcontainer container 8a7adadd6f5410efa25702ad6974ab2f8aa9e92583f8c0f7a7944cf0dddad17c. May 15 12:11:40.465534 systemd[1]: Created slice kubepods-besteffort-pod1f842b83_f8cd_41cf_a38a_55695f4d6f46.slice - libcontainer container kubepods-besteffort-pod1f842b83_f8cd_41cf_a38a_55695f4d6f46.slice. 
May 15 12:11:40.481469 containerd[1522]: time="2025-05-15T12:11:40.481420442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v4rn2,Uid:e7ca271f-acaf-4df7-9a23-88c1ad47f0d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a7adadd6f5410efa25702ad6974ab2f8aa9e92583f8c0f7a7944cf0dddad17c\"" May 15 12:11:40.484441 containerd[1522]: time="2025-05-15T12:11:40.484400351Z" level=info msg="CreateContainer within sandbox \"8a7adadd6f5410efa25702ad6974ab2f8aa9e92583f8c0f7a7944cf0dddad17c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 12:11:40.492498 containerd[1522]: time="2025-05-15T12:11:40.492356239Z" level=info msg="Container 8a0811dbff75f062a0dde76902e34ffb321c23e86a875496ca629d3fd24ef5bd: CDI devices from CRI Config.CDIDevices: []" May 15 12:11:40.500875 containerd[1522]: time="2025-05-15T12:11:40.500833877Z" level=info msg="CreateContainer within sandbox \"8a7adadd6f5410efa25702ad6974ab2f8aa9e92583f8c0f7a7944cf0dddad17c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8a0811dbff75f062a0dde76902e34ffb321c23e86a875496ca629d3fd24ef5bd\"" May 15 12:11:40.501338 containerd[1522]: time="2025-05-15T12:11:40.501315600Z" level=info msg="StartContainer for \"8a0811dbff75f062a0dde76902e34ffb321c23e86a875496ca629d3fd24ef5bd\"" May 15 12:11:40.504396 kubelet[2625]: I0515 12:11:40.503074 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f842b83-f8cd-41cf-a38a-55695f4d6f46-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-qfwj5\" (UID: \"1f842b83-f8cd-41cf-a38a-55695f4d6f46\") " pod="tigera-operator/tigera-operator-6f6897fdc5-qfwj5" May 15 12:11:40.504396 kubelet[2625]: I0515 12:11:40.503116 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xmxj\" (UniqueName: \"kubernetes.io/projected/1f842b83-f8cd-41cf-a38a-55695f4d6f46-kube-api-access-5xmxj\") pod \"tigera-operator-6f6897fdc5-qfwj5\" (UID: \"1f842b83-f8cd-41cf-a38a-55695f4d6f46\") " pod="tigera-operator/tigera-operator-6f6897fdc5-qfwj5" May 15 12:11:40.506264 containerd[1522]: time="2025-05-15T12:11:40.506228320Z" level=info msg="connecting to shim 8a0811dbff75f062a0dde76902e34ffb321c23e86a875496ca629d3fd24ef5bd" address="unix:///run/containerd/s/e659e822454eab508d2df604186430136f2827f1f3005abda0dd926cbd55d617" protocol=ttrpc version=3 May 15 12:11:40.524391 systemd[1]: Started cri-containerd-8a0811dbff75f062a0dde76902e34ffb321c23e86a875496ca629d3fd24ef5bd.scope - libcontainer container 8a0811dbff75f062a0dde76902e34ffb321c23e86a875496ca629d3fd24ef5bd. 
May 15 12:11:40.561919 containerd[1522]: time="2025-05-15T12:11:40.561859636Z" level=info msg="StartContainer for \"8a0811dbff75f062a0dde76902e34ffb321c23e86a875496ca629d3fd24ef5bd\" returns successfully" May 15 12:11:40.771233 containerd[1522]: time="2025-05-15T12:11:40.770921527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-qfwj5,Uid:1f842b83-f8cd-41cf-a38a-55695f4d6f46,Namespace:tigera-operator,Attempt:0,}" May 15 12:11:40.789229 containerd[1522]: time="2025-05-15T12:11:40.787951539Z" level=info msg="connecting to shim 1b01f849c7c21afb466ea5b15c39339cebeadcefab5d7cc7721a94874a87754f" address="unix:///run/containerd/s/51c2911c3221a8e613469c395183238bebae52b7d51c1d1374fe2a801e74d59d" namespace=k8s.io protocol=ttrpc version=3 May 15 12:11:40.810345 systemd[1]: Started cri-containerd-1b01f849c7c21afb466ea5b15c39339cebeadcefab5d7cc7721a94874a87754f.scope - libcontainer container 1b01f849c7c21afb466ea5b15c39339cebeadcefab5d7cc7721a94874a87754f. May 15 12:11:40.840636 containerd[1522]: time="2025-05-15T12:11:40.840594508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-qfwj5,Uid:1f842b83-f8cd-41cf-a38a-55695f4d6f46,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1b01f849c7c21afb466ea5b15c39339cebeadcefab5d7cc7721a94874a87754f\"" May 15 12:11:40.842848 containerd[1522]: time="2025-05-15T12:11:40.842297593Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 12:11:41.754858 kubelet[2625]: I0515 12:11:41.754504 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v4rn2" podStartSLOduration=1.754487046 podStartE2EDuration="1.754487046s" podCreationTimestamp="2025-05-15 12:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:11:41.414851795 +0000 UTC m=+9.131397675" watchObservedRunningTime="2025-05-15 12:11:41.754487046 +0000 UTC m=+9.471032926" May 15 12:11:42.087500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035635892.mount: Deactivated successfully. 
May 15 12:11:42.881764 containerd[1522]: time="2025-05-15T12:11:42.881706871Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 15 12:11:42.885097 containerd[1522]: time="2025-05-15T12:11:42.885058686Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.042729942s" May 15 12:11:42.885097 containerd[1522]: time="2025-05-15T12:11:42.885092476Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 15 12:11:42.888359 containerd[1522]: time="2025-05-15T12:11:42.887910614Z" level=info msg="CreateContainer within sandbox \"1b01f849c7c21afb466ea5b15c39339cebeadcefab5d7cc7721a94874a87754f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 12:11:42.895355 containerd[1522]: time="2025-05-15T12:11:42.894756641Z" level=info msg="Container afea45952cc84b00c6a41a81e66f37dc467f594ccb2c1208269000cf9253d4f2: CDI devices from CRI Config.CDIDevices: []" May 15 12:11:42.897267 containerd[1522]: time="2025-05-15T12:11:42.897181540Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:42.898594 containerd[1522]: time="2025-05-15T12:11:42.898478663Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:42.899309 containerd[1522]: time="2025-05-15T12:11:42.899276419Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:42.899911 containerd[1522]: time="2025-05-15T12:11:42.899871517Z" level=info msg="CreateContainer within sandbox \"1b01f849c7c21afb466ea5b15c39339cebeadcefab5d7cc7721a94874a87754f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"afea45952cc84b00c6a41a81e66f37dc467f594ccb2c1208269000cf9253d4f2\"" May 15 12:11:42.900440 containerd[1522]: time="2025-05-15T12:11:42.900335695Z" level=info msg="StartContainer for \"afea45952cc84b00c6a41a81e66f37dc467f594ccb2c1208269000cf9253d4f2\"" May 15 12:11:42.901973 containerd[1522]: time="2025-05-15T12:11:42.901932847Z" level=info msg="connecting to shim afea45952cc84b00c6a41a81e66f37dc467f594ccb2c1208269000cf9253d4f2" address="unix:///run/containerd/s/51c2911c3221a8e613469c395183238bebae52b7d51c1d1374fe2a801e74d59d" protocol=ttrpc version=3 May 15 12:11:42.937361 systemd[1]: Started cri-containerd-afea45952cc84b00c6a41a81e66f37dc467f594ccb2c1208269000cf9253d4f2.scope - libcontainer container afea45952cc84b00c6a41a81e66f37dc467f594ccb2c1208269000cf9253d4f2. 
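
The image pull above reports 19,323,084 bytes read for quay.io/tigera/operator:v1.36.7 and a pull time of 2.042729942s. A one-line arithmetic check of the effective transfer rate implied by those two logged figures.

    # Effective pull rate implied by the log: 19,323,084 bytes in 2.042729942 s.
    bytes_read = 19_323_084
    seconds = 2.042729942
    print(f"{bytes_read / seconds / 1_000_000:.1f} MB/s")       # ~9.5 MB/s
    print(f"{bytes_read / seconds / (1024 * 1024):.1f} MiB/s")  # ~9.0 MiB/s
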
May 15 12:11:42.965833 containerd[1522]: time="2025-05-15T12:11:42.965789924Z" level=info msg="StartContainer for \"afea45952cc84b00c6a41a81e66f37dc467f594ccb2c1208269000cf9253d4f2\" returns successfully" May 15 12:11:43.427256 kubelet[2625]: I0515 12:11:43.427176 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-qfwj5" podStartSLOduration=1.382016506 podStartE2EDuration="3.427159618s" podCreationTimestamp="2025-05-15 12:11:40 +0000 UTC" firstStartedPulling="2025-05-15 12:11:40.841675556 +0000 UTC m=+8.558221436" lastFinishedPulling="2025-05-15 12:11:42.886818668 +0000 UTC m=+10.603364548" observedRunningTime="2025-05-15 12:11:43.418246977 +0000 UTC m=+11.134792857" watchObservedRunningTime="2025-05-15 12:11:43.427159618 +0000 UTC m=+11.143705498" May 15 12:11:46.967529 systemd[1]: Created slice kubepods-besteffort-podd7f8f540_5b07_488a_9b15_872f4e2bc986.slice - libcontainer container kubepods-besteffort-podd7f8f540_5b07_488a_9b15_872f4e2bc986.slice. May 15 12:11:47.011210 systemd[1]: Created slice kubepods-besteffort-pod22748c37_6c06_4c0e_bd85_a7361507b3c0.slice - libcontainer container kubepods-besteffort-pod22748c37_6c06_4c0e_bd85_a7361507b3c0.slice. May 15 12:11:47.045335 kubelet[2625]: I0515 12:11:47.045282 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22748c37-6c06-4c0e-bd85-a7361507b3c0-tigera-ca-bundle\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045335 kubelet[2625]: I0515 12:11:47.045328 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d7f8f540-5b07-488a-9b15-872f4e2bc986-typha-certs\") pod \"calico-typha-86c498c8d9-29lwk\" (UID: \"d7f8f540-5b07-488a-9b15-872f4e2bc986\") " pod="calico-system/calico-typha-86c498c8d9-29lwk" May 15 12:11:47.045335 kubelet[2625]: I0515 12:11:47.045345 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/22748c37-6c06-4c0e-bd85-a7361507b3c0-var-run-calico\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045729 kubelet[2625]: I0515 12:11:47.045363 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/22748c37-6c06-4c0e-bd85-a7361507b3c0-cni-log-dir\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045729 kubelet[2625]: I0515 12:11:47.045380 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/22748c37-6c06-4c0e-bd85-a7361507b3c0-node-certs\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045729 kubelet[2625]: I0515 12:11:47.045400 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/22748c37-6c06-4c0e-bd85-a7361507b3c0-cni-bin-dir\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " 
pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045729 kubelet[2625]: I0515 12:11:47.045415 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/22748c37-6c06-4c0e-bd85-a7361507b3c0-flexvol-driver-host\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045729 kubelet[2625]: I0515 12:11:47.045432 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjh5m\" (UniqueName: \"kubernetes.io/projected/22748c37-6c06-4c0e-bd85-a7361507b3c0-kube-api-access-tjh5m\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045847 kubelet[2625]: I0515 12:11:47.045447 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7f8f540-5b07-488a-9b15-872f4e2bc986-tigera-ca-bundle\") pod \"calico-typha-86c498c8d9-29lwk\" (UID: \"d7f8f540-5b07-488a-9b15-872f4e2bc986\") " pod="calico-system/calico-typha-86c498c8d9-29lwk" May 15 12:11:47.045847 kubelet[2625]: I0515 12:11:47.045462 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/22748c37-6c06-4c0e-bd85-a7361507b3c0-policysync\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045847 kubelet[2625]: I0515 12:11:47.045476 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22748c37-6c06-4c0e-bd85-a7361507b3c0-var-lib-calico\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045847 kubelet[2625]: I0515 12:11:47.045491 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7s9b\" (UniqueName: \"kubernetes.io/projected/d7f8f540-5b07-488a-9b15-872f4e2bc986-kube-api-access-z7s9b\") pod \"calico-typha-86c498c8d9-29lwk\" (UID: \"d7f8f540-5b07-488a-9b15-872f4e2bc986\") " pod="calico-system/calico-typha-86c498c8d9-29lwk" May 15 12:11:47.045847 kubelet[2625]: I0515 12:11:47.045505 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22748c37-6c06-4c0e-bd85-a7361507b3c0-xtables-lock\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045950 kubelet[2625]: I0515 12:11:47.045521 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/22748c37-6c06-4c0e-bd85-a7361507b3c0-cni-net-dir\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " pod="calico-system/calico-node-tt2hq" May 15 12:11:47.045950 kubelet[2625]: I0515 12:11:47.045541 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22748c37-6c06-4c0e-bd85-a7361507b3c0-lib-modules\") pod \"calico-node-tt2hq\" (UID: \"22748c37-6c06-4c0e-bd85-a7361507b3c0\") " 
pod="calico-system/calico-node-tt2hq" May 15 12:11:47.117935 kubelet[2625]: E0515 12:11:47.117881 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:11:47.146580 kubelet[2625]: I0515 12:11:47.146535 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8tp9\" (UniqueName: \"kubernetes.io/projected/6eb3b675-ac76-4d65-8306-ee2c30f0c7f1-kube-api-access-q8tp9\") pod \"csi-node-driver-jl6hb\" (UID: \"6eb3b675-ac76-4d65-8306-ee2c30f0c7f1\") " pod="calico-system/csi-node-driver-jl6hb" May 15 12:11:47.148216 kubelet[2625]: I0515 12:11:47.146832 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6eb3b675-ac76-4d65-8306-ee2c30f0c7f1-socket-dir\") pod \"csi-node-driver-jl6hb\" (UID: \"6eb3b675-ac76-4d65-8306-ee2c30f0c7f1\") " pod="calico-system/csi-node-driver-jl6hb" May 15 12:11:47.158132 kubelet[2625]: I0515 12:11:47.147080 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6eb3b675-ac76-4d65-8306-ee2c30f0c7f1-varrun\") pod \"csi-node-driver-jl6hb\" (UID: \"6eb3b675-ac76-4d65-8306-ee2c30f0c7f1\") " pod="calico-system/csi-node-driver-jl6hb" May 15 12:11:47.158467 kubelet[2625]: I0515 12:11:47.158434 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6eb3b675-ac76-4d65-8306-ee2c30f0c7f1-kubelet-dir\") pod \"csi-node-driver-jl6hb\" (UID: \"6eb3b675-ac76-4d65-8306-ee2c30f0c7f1\") " pod="calico-system/csi-node-driver-jl6hb" May 15 12:11:47.158531 kubelet[2625]: I0515 12:11:47.158476 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6eb3b675-ac76-4d65-8306-ee2c30f0c7f1-registration-dir\") pod \"csi-node-driver-jl6hb\" (UID: \"6eb3b675-ac76-4d65-8306-ee2c30f0c7f1\") " pod="calico-system/csi-node-driver-jl6hb" May 15 12:11:47.168212 kubelet[2625]: E0515 12:11:47.166320 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.168212 kubelet[2625]: W0515 12:11:47.166340 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.168212 kubelet[2625]: E0515 12:11:47.166363 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:11:47.168212 kubelet[2625]: E0515 12:11:47.166933 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.168212 kubelet[2625]: W0515 12:11:47.166947 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.168212 kubelet[2625]: E0515 12:11:47.166961 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.171686 kubelet[2625]: E0515 12:11:47.171651 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.171686 kubelet[2625]: W0515 12:11:47.171668 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.171686 kubelet[2625]: E0515 12:11:47.171682 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.179602 kubelet[2625]: E0515 12:11:47.179497 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.179602 kubelet[2625]: W0515 12:11:47.179515 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.179602 kubelet[2625]: E0515 12:11:47.179531 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.260300 kubelet[2625]: E0515 12:11:47.259731 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.260300 kubelet[2625]: W0515 12:11:47.260110 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.260300 kubelet[2625]: E0515 12:11:47.260134 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.260930 kubelet[2625]: E0515 12:11:47.260912 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.260991 kubelet[2625]: W0515 12:11:47.260979 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.261052 kubelet[2625]: E0515 12:11:47.261041 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:11:47.261484 kubelet[2625]: E0515 12:11:47.261357 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.261484 kubelet[2625]: W0515 12:11:47.261371 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.261484 kubelet[2625]: E0515 12:11:47.261388 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.261651 kubelet[2625]: E0515 12:11:47.261638 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.261718 kubelet[2625]: W0515 12:11:47.261707 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.261824 kubelet[2625]: E0515 12:11:47.261796 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.261987 kubelet[2625]: E0515 12:11:47.261975 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.262048 kubelet[2625]: W0515 12:11:47.262037 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.262144 kubelet[2625]: E0515 12:11:47.262110 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.262320 kubelet[2625]: E0515 12:11:47.262307 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.262402 kubelet[2625]: W0515 12:11:47.262391 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.262465 kubelet[2625]: E0515 12:11:47.262454 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.262745 kubelet[2625]: E0515 12:11:47.262640 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.262745 kubelet[2625]: W0515 12:11:47.262652 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.262745 kubelet[2625]: E0515 12:11:47.262667 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:11:47.262897 kubelet[2625]: E0515 12:11:47.262885 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.262948 kubelet[2625]: W0515 12:11:47.262938 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.263030 kubelet[2625]: E0515 12:11:47.263011 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.263282 kubelet[2625]: E0515 12:11:47.263176 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.263282 kubelet[2625]: W0515 12:11:47.263202 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.263282 kubelet[2625]: E0515 12:11:47.263228 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.263437 kubelet[2625]: E0515 12:11:47.263424 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.263489 kubelet[2625]: W0515 12:11:47.263478 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.263568 kubelet[2625]: E0515 12:11:47.263547 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.263745 kubelet[2625]: E0515 12:11:47.263732 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.263809 kubelet[2625]: W0515 12:11:47.263798 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.263882 kubelet[2625]: E0515 12:11:47.263866 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.264056 kubelet[2625]: E0515 12:11:47.264043 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.264114 kubelet[2625]: W0515 12:11:47.264103 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.264182 kubelet[2625]: E0515 12:11:47.264167 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:11:47.264397 kubelet[2625]: E0515 12:11:47.264383 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.264465 kubelet[2625]: W0515 12:11:47.264454 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.264538 kubelet[2625]: E0515 12:11:47.264523 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.264719 kubelet[2625]: E0515 12:11:47.264705 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.264858 kubelet[2625]: W0515 12:11:47.264773 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.264858 kubelet[2625]: E0515 12:11:47.264799 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.264974 kubelet[2625]: E0515 12:11:47.264963 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.265026 kubelet[2625]: W0515 12:11:47.265016 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.265091 kubelet[2625]: E0515 12:11:47.265076 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.265371 kubelet[2625]: E0515 12:11:47.265284 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.265371 kubelet[2625]: W0515 12:11:47.265297 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.265371 kubelet[2625]: E0515 12:11:47.265318 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.265524 kubelet[2625]: E0515 12:11:47.265512 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.265576 kubelet[2625]: W0515 12:11:47.265566 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.265648 kubelet[2625]: E0515 12:11:47.265632 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:11:47.265951 kubelet[2625]: E0515 12:11:47.265815 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.265951 kubelet[2625]: W0515 12:11:47.265828 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.265951 kubelet[2625]: E0515 12:11:47.265844 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.266086 kubelet[2625]: E0515 12:11:47.266074 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.266143 kubelet[2625]: W0515 12:11:47.266132 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.266229 kubelet[2625]: E0515 12:11:47.266216 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.266478 kubelet[2625]: E0515 12:11:47.266426 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.266478 kubelet[2625]: W0515 12:11:47.266443 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.266478 kubelet[2625]: E0515 12:11:47.266460 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.266591 kubelet[2625]: E0515 12:11:47.266579 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.266591 kubelet[2625]: W0515 12:11:47.266589 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.266640 kubelet[2625]: E0515 12:11:47.266607 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.266740 kubelet[2625]: E0515 12:11:47.266726 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.266740 kubelet[2625]: W0515 12:11:47.266735 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.266740 kubelet[2625]: E0515 12:11:47.266764 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:11:47.266903 kubelet[2625]: E0515 12:11:47.266872 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.266903 kubelet[2625]: W0515 12:11:47.266880 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.266987 kubelet[2625]: E0515 12:11:47.266932 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.267265 kubelet[2625]: E0515 12:11:47.267017 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.267265 kubelet[2625]: W0515 12:11:47.267028 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.267265 kubelet[2625]: E0515 12:11:47.267036 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.268162 kubelet[2625]: E0515 12:11:47.268126 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.268162 kubelet[2625]: W0515 12:11:47.268141 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.268162 kubelet[2625]: E0515 12:11:47.268156 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:11:47.272006 containerd[1522]: time="2025-05-15T12:11:47.271962639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86c498c8d9-29lwk,Uid:d7f8f540-5b07-488a-9b15-872f4e2bc986,Namespace:calico-system,Attempt:0,}" May 15 12:11:47.278531 kubelet[2625]: E0515 12:11:47.278508 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:11:47.278531 kubelet[2625]: W0515 12:11:47.278526 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:11:47.278619 kubelet[2625]: E0515 12:11:47.278542 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:11:47.307828 containerd[1522]: time="2025-05-15T12:11:47.307732068Z" level=info msg="connecting to shim bc784251925700719b557669e9ab1c5141dd9d13d3900e4f7205b953de4e22d5" address="unix:///run/containerd/s/6947af90c9d3f71383fbd05010f13eac4441f0e974acd45cbc1a2c1bbfa64a9c" namespace=k8s.io protocol=ttrpc version=3 May 15 12:11:47.314844 containerd[1522]: time="2025-05-15T12:11:47.314743320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tt2hq,Uid:22748c37-6c06-4c0e-bd85-a7361507b3c0,Namespace:calico-system,Attempt:0,}" May 15 12:11:47.329710 containerd[1522]: time="2025-05-15T12:11:47.329669146Z" level=info msg="connecting to shim c0d2088b6bbdc34f2e03b3967793efdcea2c1753c35111b122b414d3f40d0674" address="unix:///run/containerd/s/360931ffb1b5ff10174b47735a5f4cb96a5418af62482d332f82b022691cedf1" namespace=k8s.io protocol=ttrpc version=3 May 15 12:11:47.344412 systemd[1]: Started cri-containerd-bc784251925700719b557669e9ab1c5141dd9d13d3900e4f7205b953de4e22d5.scope - libcontainer container bc784251925700719b557669e9ab1c5141dd9d13d3900e4f7205b953de4e22d5. May 15 12:11:47.369556 systemd[1]: Started cri-containerd-c0d2088b6bbdc34f2e03b3967793efdcea2c1753c35111b122b414d3f40d0674.scope - libcontainer container c0d2088b6bbdc34f2e03b3967793efdcea2c1753c35111b122b414d3f40d0674. May 15 12:11:47.436935 containerd[1522]: time="2025-05-15T12:11:47.436897136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tt2hq,Uid:22748c37-6c06-4c0e-bd85-a7361507b3c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0d2088b6bbdc34f2e03b3967793efdcea2c1753c35111b122b414d3f40d0674\"" May 15 12:11:47.439478 containerd[1522]: time="2025-05-15T12:11:47.439018423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86c498c8d9-29lwk,Uid:d7f8f540-5b07-488a-9b15-872f4e2bc986,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc784251925700719b557669e9ab1c5141dd9d13d3900e4f7205b953de4e22d5\"" May 15 12:11:47.443066 containerd[1522]: time="2025-05-15T12:11:47.443038134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 12:11:48.377458 kubelet[2625]: E0515 12:11:48.377134 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:11:48.466352 update_engine[1514]: I20250515 12:11:48.466272 1514 update_attempter.cc:509] Updating boot flags... 
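
Note: the repeated kubelet errors above all come from FlexVolume plugin probing. The kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver it finds with the argument "init", and tries to unmarshal a JSON status object from the driver's stdout. Here the nodeagent~uds/uds executable is missing, so the call produces empty output and the unmarshal fails with "unexpected end of JSON input"; the retries are noise until a driver is installed (the pod2daemon-flexvol init container pulled further down is what normally installs this driver for Calico). Below is a minimal sketch, not the real driver, of the JSON shape the kubelet's driver-call.go expects from "init"; the field names follow the FlexVolume convention and everything else is illustrative.

// flexvol_init_sketch.go — illustrative only: a stub FlexVolume driver that
// answers the "init" call the kubelet makes while probing
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/<driver>.
// An empty stdout here is exactly what triggers "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the fields the kubelet's flexvolume driver-call code decodes.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this stub does not implement is reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}
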
May 15 12:11:50.377745 kubelet[2625]: E0515 12:11:50.377693 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:11:51.598470 containerd[1522]: time="2025-05-15T12:11:51.598421879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:51.599478 containerd[1522]: time="2025-05-15T12:11:51.599019302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 15 12:11:51.601340 containerd[1522]: time="2025-05-15T12:11:51.600261657Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:51.602540 containerd[1522]: time="2025-05-15T12:11:51.602503222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:51.603719 containerd[1522]: time="2025-05-15T12:11:51.603239253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 4.160166447s" May 15 12:11:51.603719 containerd[1522]: time="2025-05-15T12:11:51.603284362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 15 12:11:51.605673 containerd[1522]: time="2025-05-15T12:11:51.605632463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 12:11:51.608415 containerd[1522]: time="2025-05-15T12:11:51.608252581Z" level=info msg="CreateContainer within sandbox \"c0d2088b6bbdc34f2e03b3967793efdcea2c1753c35111b122b414d3f40d0674\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 12:11:51.619372 containerd[1522]: time="2025-05-15T12:11:51.618333105Z" level=info msg="Container b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b: CDI devices from CRI Config.CDIDevices: []" May 15 12:11:51.625727 containerd[1522]: time="2025-05-15T12:11:51.625676098Z" level=info msg="CreateContainer within sandbox \"c0d2088b6bbdc34f2e03b3967793efdcea2c1753c35111b122b414d3f40d0674\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b\"" May 15 12:11:51.626460 containerd[1522]: time="2025-05-15T12:11:51.626433324Z" level=info msg="StartContainer for \"b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b\"" May 15 12:11:51.627863 containerd[1522]: time="2025-05-15T12:11:51.627829683Z" level=info msg="connecting to shim b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b" address="unix:///run/containerd/s/360931ffb1b5ff10174b47735a5f4cb96a5418af62482d332f82b022691cedf1" protocol=ttrpc 
version=3 May 15 12:11:51.666386 systemd[1]: Started cri-containerd-b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b.scope - libcontainer container b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b. May 15 12:11:51.719859 containerd[1522]: time="2025-05-15T12:11:51.719808473Z" level=info msg="StartContainer for \"b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b\" returns successfully" May 15 12:11:51.766992 systemd[1]: cri-containerd-b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b.scope: Deactivated successfully. May 15 12:11:51.785229 containerd[1522]: time="2025-05-15T12:11:51.785166099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b\" id:\"b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b\" pid:3174 exited_at:{seconds:1747311111 nanos:780153770}" May 15 12:11:51.787652 containerd[1522]: time="2025-05-15T12:11:51.787601859Z" level=info msg="received exit event container_id:\"b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b\" id:\"b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b\" pid:3174 exited_at:{seconds:1747311111 nanos:780153770}" May 15 12:11:51.825849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1c3a093feb050ff8731092c2c750c63847315931179cb60b5b1049e5333684b-rootfs.mount: Deactivated successfully. May 15 12:11:52.377887 kubelet[2625]: E0515 12:11:52.377801 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:11:54.377791 kubelet[2625]: E0515 12:11:54.377409 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:11:56.386719 kubelet[2625]: E0515 12:11:56.378570 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:11:58.377034 kubelet[2625]: E0515 12:11:58.376966 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:11:58.944938 containerd[1522]: time="2025-05-15T12:11:58.944882295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:58.945299 containerd[1522]: time="2025-05-15T12:11:58.945243668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 15 12:11:58.946270 containerd[1522]: time="2025-05-15T12:11:58.946235766Z" level=info msg="ImageCreate event 
name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:58.955299 containerd[1522]: time="2025-05-15T12:11:58.955251428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:11:58.956616 containerd[1522]: time="2025-05-15T12:11:58.956572305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 7.350890653s" May 15 12:11:58.956616 containerd[1522]: time="2025-05-15T12:11:58.956619736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 15 12:11:58.957763 containerd[1522]: time="2025-05-15T12:11:58.957733931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 12:11:58.971569 containerd[1522]: time="2025-05-15T12:11:58.971530873Z" level=info msg="CreateContainer within sandbox \"bc784251925700719b557669e9ab1c5141dd9d13d3900e4f7205b953de4e22d5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 12:11:58.981287 containerd[1522]: time="2025-05-15T12:11:58.980429036Z" level=info msg="Container 4dbdff3ee9f3a116f9196c69343ca54f11d983483823d8dcbc036154ea078ed0: CDI devices from CRI Config.CDIDevices: []" May 15 12:11:59.047984 containerd[1522]: time="2025-05-15T12:11:59.047942367Z" level=info msg="CreateContainer within sandbox \"bc784251925700719b557669e9ab1c5141dd9d13d3900e4f7205b953de4e22d5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4dbdff3ee9f3a116f9196c69343ca54f11d983483823d8dcbc036154ea078ed0\"" May 15 12:11:59.049097 containerd[1522]: time="2025-05-15T12:11:59.049026413Z" level=info msg="StartContainer for \"4dbdff3ee9f3a116f9196c69343ca54f11d983483823d8dcbc036154ea078ed0\"" May 15 12:11:59.050481 containerd[1522]: time="2025-05-15T12:11:59.050451679Z" level=info msg="connecting to shim 4dbdff3ee9f3a116f9196c69343ca54f11d983483823d8dcbc036154ea078ed0" address="unix:///run/containerd/s/6947af90c9d3f71383fbd05010f13eac4441f0e974acd45cbc1a2c1bbfa64a9c" protocol=ttrpc version=3 May 15 12:11:59.073377 systemd[1]: Started cri-containerd-4dbdff3ee9f3a116f9196c69343ca54f11d983483823d8dcbc036154ea078ed0.scope - libcontainer container 4dbdff3ee9f3a116f9196c69343ca54f11d983483823d8dcbc036154ea078ed0. 
May 15 12:11:59.136345 containerd[1522]: time="2025-05-15T12:11:59.136299982Z" level=info msg="StartContainer for \"4dbdff3ee9f3a116f9196c69343ca54f11d983483823d8dcbc036154ea078ed0\" returns successfully" May 15 12:11:59.455627 kubelet[2625]: I0515 12:11:59.455554 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86c498c8d9-29lwk" podStartSLOduration=1.939589081 podStartE2EDuration="13.455536816s" podCreationTimestamp="2025-05-15 12:11:46 +0000 UTC" firstStartedPulling="2025-05-15 12:11:47.441590952 +0000 UTC m=+15.158136832" lastFinishedPulling="2025-05-15 12:11:58.957538687 +0000 UTC m=+26.674084567" observedRunningTime="2025-05-15 12:11:59.454174739 +0000 UTC m=+27.170720619" watchObservedRunningTime="2025-05-15 12:11:59.455536816 +0000 UTC m=+27.172082696" May 15 12:12:00.377490 kubelet[2625]: E0515 12:12:00.377430 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:12:02.377796 kubelet[2625]: E0515 12:12:02.377724 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:12:04.064441 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:35932.service - OpenSSH per-connection server daemon (10.0.0.1:35932). May 15 12:12:04.139053 sshd[3265]: Accepted publickey for core from 10.0.0.1 port 35932 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:12:04.140402 sshd-session[3265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:12:04.144945 systemd-logind[1503]: New session 8 of user core. May 15 12:12:04.157344 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 12:12:04.275629 sshd[3267]: Connection closed by 10.0.0.1 port 35932 May 15 12:12:04.275922 sshd-session[3265]: pam_unix(sshd:session): session closed for user core May 15 12:12:04.278511 systemd[1]: session-8.scope: Deactivated successfully. May 15 12:12:04.279454 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:35932.service: Deactivated successfully. May 15 12:12:04.282718 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit. May 15 12:12:04.284134 systemd-logind[1503]: Removed session 8. 
May 15 12:12:04.378208 kubelet[2625]: E0515 12:12:04.377785 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:12:06.378289 kubelet[2625]: E0515 12:12:06.378142 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:12:07.424067 containerd[1522]: time="2025-05-15T12:12:07.424020212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:07.425045 containerd[1522]: time="2025-05-15T12:12:07.424636287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 15 12:12:07.425564 containerd[1522]: time="2025-05-15T12:12:07.425528524Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:07.427403 containerd[1522]: time="2025-05-15T12:12:07.427376429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:07.428119 containerd[1522]: time="2025-05-15T12:12:07.428024139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 8.470256054s" May 15 12:12:07.428119 containerd[1522]: time="2025-05-15T12:12:07.428056575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 15 12:12:07.430962 containerd[1522]: time="2025-05-15T12:12:07.430928858Z" level=info msg="CreateContainer within sandbox \"c0d2088b6bbdc34f2e03b3967793efdcea2c1753c35111b122b414d3f40d0674\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 12:12:07.441142 containerd[1522]: time="2025-05-15T12:12:07.441093533Z" level=info msg="Container 80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee: CDI devices from CRI Config.CDIDevices: []" May 15 12:12:07.448256 containerd[1522]: time="2025-05-15T12:12:07.448220428Z" level=info msg="CreateContainer within sandbox \"c0d2088b6bbdc34f2e03b3967793efdcea2c1753c35111b122b414d3f40d0674\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee\"" May 15 12:12:07.448941 containerd[1522]: time="2025-05-15T12:12:07.448918011Z" level=info msg="StartContainer for \"80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee\"" May 15 12:12:07.450600 containerd[1522]: time="2025-05-15T12:12:07.450546106Z" level=info msg="connecting to shim 
80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee" address="unix:///run/containerd/s/360931ffb1b5ff10174b47735a5f4cb96a5418af62482d332f82b022691cedf1" protocol=ttrpc version=3 May 15 12:12:07.470408 systemd[1]: Started cri-containerd-80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee.scope - libcontainer container 80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee. May 15 12:12:07.503501 containerd[1522]: time="2025-05-15T12:12:07.503428997Z" level=info msg="StartContainer for \"80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee\" returns successfully" May 15 12:12:07.972777 systemd[1]: cri-containerd-80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee.scope: Deactivated successfully. May 15 12:12:07.973228 systemd[1]: cri-containerd-80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee.scope: Consumed 448ms CPU time, 158.3M memory peak, 4K read from disk, 150.3M written to disk. May 15 12:12:07.975305 containerd[1522]: time="2025-05-15T12:12:07.975248983Z" level=info msg="received exit event container_id:\"80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee\" id:\"80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee\" pid:3303 exited_at:{seconds:1747311127 nanos:974814643}" May 15 12:12:07.975438 containerd[1522]: time="2025-05-15T12:12:07.975405481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee\" id:\"80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee\" pid:3303 exited_at:{seconds:1747311127 nanos:974814643}" May 15 12:12:07.992119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80801c138f41870613686234954cdd0fab7f9bcf8c9c4ef10ef8c1d4dd9319ee-rootfs.mount: Deactivated successfully. May 15 12:12:08.064967 kubelet[2625]: I0515 12:12:08.064845 2625 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 12:12:08.157643 systemd[1]: Created slice kubepods-besteffort-pod033e6c41_c7e1_4f89_99b1_69274c65f4ef.slice - libcontainer container kubepods-besteffort-pod033e6c41_c7e1_4f89_99b1_69274c65f4ef.slice. May 15 12:12:08.164362 systemd[1]: Created slice kubepods-besteffort-podc2182c51_05db_4f74_8a4c_820fa5a2345e.slice - libcontainer container kubepods-besteffort-podc2182c51_05db_4f74_8a4c_820fa5a2345e.slice. May 15 12:12:08.172999 systemd[1]: Created slice kubepods-burstable-podec08a749_bfa9_479d_ad96_3a7485e9dad7.slice - libcontainer container kubepods-burstable-podec08a749_bfa9_479d_ad96_3a7485e9dad7.slice. May 15 12:12:08.179259 systemd[1]: Created slice kubepods-burstable-pod5dd74982_3b3d_4f17_955c_aa7334cd7c5a.slice - libcontainer container kubepods-burstable-pod5dd74982_3b3d_4f17_955c_aa7334cd7c5a.slice. May 15 12:12:08.186607 systemd[1]: Created slice kubepods-besteffort-pod277b8b69_903f_40cf_a8b0_2af268672361.slice - libcontainer container kubepods-besteffort-pod277b8b69_903f_40cf_a8b0_2af268672361.slice. 
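
Note: the "Created slice kubepods-besteffort-pod<uid>.slice" lines show the systemd cgroup driver's naming scheme: the pod's QoS class selects the parent slice (kubepods-besteffort, kubepods-burstable, or plain kubepods for guaranteed pods), and the pod UID is appended with its dashes replaced by underscores, because "-" is the hierarchy separator in systemd slice names. A small sketch of that mapping, using UIDs taken from this log (illustrative, not kubelet source):

// pod_slice_name_sketch.go — illustrative mapping from pod UID and QoS class
// to the systemd slice names seen in the log.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	parent := "kubepods"
	if qos != "guaranteed" {
		parent += "-" + qos // kubepods-besteffort, kubepods-burstable
	}
	// Dashes in the UID are escaped to underscores so they are not read as
	// slice hierarchy separators.
	return fmt.Sprintf("%s-pod%s.slice", parent, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "033e6c41-c7e1-4f89-99b1-69274c65f4ef"))
	// kubepods-besteffort-pod033e6c41_c7e1_4f89_99b1_69274c65f4ef.slice
	fmt.Println(podSlice("burstable", "ec08a749-bfa9-479d-ad96-3a7485e9dad7"))
	// kubepods-burstable-podec08a749_bfa9_479d_ad96_3a7485e9dad7.slice
}
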
May 15 12:12:08.199010 kubelet[2625]: I0515 12:12:08.198654 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec08a749-bfa9-479d-ad96-3a7485e9dad7-config-volume\") pod \"coredns-6f6b679f8f-mjw8m\" (UID: \"ec08a749-bfa9-479d-ad96-3a7485e9dad7\") " pod="kube-system/coredns-6f6b679f8f-mjw8m" May 15 12:12:08.199010 kubelet[2625]: I0515 12:12:08.198716 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dd74982-3b3d-4f17-955c-aa7334cd7c5a-config-volume\") pod \"coredns-6f6b679f8f-ll8dl\" (UID: \"5dd74982-3b3d-4f17-955c-aa7334cd7c5a\") " pod="kube-system/coredns-6f6b679f8f-ll8dl" May 15 12:12:08.199010 kubelet[2625]: I0515 12:12:08.198737 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl4m4\" (UniqueName: \"kubernetes.io/projected/277b8b69-903f-40cf-a8b0-2af268672361-kube-api-access-dl4m4\") pod \"calico-apiserver-c7b9b4746-mr7dx\" (UID: \"277b8b69-903f-40cf-a8b0-2af268672361\") " pod="calico-apiserver/calico-apiserver-c7b9b4746-mr7dx" May 15 12:12:08.199010 kubelet[2625]: I0515 12:12:08.198757 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c2182c51-05db-4f74-8a4c-820fa5a2345e-calico-apiserver-certs\") pod \"calico-apiserver-c7b9b4746-x7jtk\" (UID: \"c2182c51-05db-4f74-8a4c-820fa5a2345e\") " pod="calico-apiserver/calico-apiserver-c7b9b4746-x7jtk" May 15 12:12:08.199010 kubelet[2625]: I0515 12:12:08.198775 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97rwz\" (UniqueName: \"kubernetes.io/projected/ec08a749-bfa9-479d-ad96-3a7485e9dad7-kube-api-access-97rwz\") pod \"coredns-6f6b679f8f-mjw8m\" (UID: \"ec08a749-bfa9-479d-ad96-3a7485e9dad7\") " pod="kube-system/coredns-6f6b679f8f-mjw8m" May 15 12:12:08.199271 kubelet[2625]: I0515 12:12:08.198792 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm94t\" (UniqueName: \"kubernetes.io/projected/5dd74982-3b3d-4f17-955c-aa7334cd7c5a-kube-api-access-qm94t\") pod \"coredns-6f6b679f8f-ll8dl\" (UID: \"5dd74982-3b3d-4f17-955c-aa7334cd7c5a\") " pod="kube-system/coredns-6f6b679f8f-ll8dl" May 15 12:12:08.199271 kubelet[2625]: I0515 12:12:08.198807 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/033e6c41-c7e1-4f89-99b1-69274c65f4ef-tigera-ca-bundle\") pod \"calico-kube-controllers-6588b76679-g7dkc\" (UID: \"033e6c41-c7e1-4f89-99b1-69274c65f4ef\") " pod="calico-system/calico-kube-controllers-6588b76679-g7dkc" May 15 12:12:08.199271 kubelet[2625]: I0515 12:12:08.198823 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svgvr\" (UniqueName: \"kubernetes.io/projected/033e6c41-c7e1-4f89-99b1-69274c65f4ef-kube-api-access-svgvr\") pod \"calico-kube-controllers-6588b76679-g7dkc\" (UID: \"033e6c41-c7e1-4f89-99b1-69274c65f4ef\") " pod="calico-system/calico-kube-controllers-6588b76679-g7dkc" May 15 12:12:08.199271 kubelet[2625]: I0515 12:12:08.198843 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/277b8b69-903f-40cf-a8b0-2af268672361-calico-apiserver-certs\") pod \"calico-apiserver-c7b9b4746-mr7dx\" (UID: \"277b8b69-903f-40cf-a8b0-2af268672361\") " pod="calico-apiserver/calico-apiserver-c7b9b4746-mr7dx" May 15 12:12:08.199271 kubelet[2625]: I0515 12:12:08.198862 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5nww\" (UniqueName: \"kubernetes.io/projected/c2182c51-05db-4f74-8a4c-820fa5a2345e-kube-api-access-q5nww\") pod \"calico-apiserver-c7b9b4746-x7jtk\" (UID: \"c2182c51-05db-4f74-8a4c-820fa5a2345e\") " pod="calico-apiserver/calico-apiserver-c7b9b4746-x7jtk" May 15 12:12:08.382742 systemd[1]: Created slice kubepods-besteffort-pod6eb3b675_ac76_4d65_8306_ee2c30f0c7f1.slice - libcontainer container kubepods-besteffort-pod6eb3b675_ac76_4d65_8306_ee2c30f0c7f1.slice. May 15 12:12:08.395114 containerd[1522]: time="2025-05-15T12:12:08.394987308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl6hb,Uid:6eb3b675-ac76-4d65-8306-ee2c30f0c7f1,Namespace:calico-system,Attempt:0,}" May 15 12:12:08.466317 containerd[1522]: time="2025-05-15T12:12:08.466280002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6588b76679-g7dkc,Uid:033e6c41-c7e1-4f89-99b1-69274c65f4ef,Namespace:calico-system,Attempt:0,}" May 15 12:12:08.467752 containerd[1522]: time="2025-05-15T12:12:08.467725049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7b9b4746-x7jtk,Uid:c2182c51-05db-4f74-8a4c-820fa5a2345e,Namespace:calico-apiserver,Attempt:0,}" May 15 12:12:08.475038 containerd[1522]: time="2025-05-15T12:12:08.474906207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 12:12:08.484801 containerd[1522]: time="2025-05-15T12:12:08.484759168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ll8dl,Uid:5dd74982-3b3d-4f17-955c-aa7334cd7c5a,Namespace:kube-system,Attempt:0,}" May 15 12:12:08.491584 containerd[1522]: time="2025-05-15T12:12:08.491537300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7b9b4746-mr7dx,Uid:277b8b69-903f-40cf-a8b0-2af268672361,Namespace:calico-apiserver,Attempt:0,}" May 15 12:12:08.493234 containerd[1522]: time="2025-05-15T12:12:08.493121928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mjw8m,Uid:ec08a749-bfa9-479d-ad96-3a7485e9dad7,Namespace:kube-system,Attempt:0,}" May 15 12:12:08.770648 containerd[1522]: time="2025-05-15T12:12:08.770345008Z" level=error msg="Failed to destroy network for sandbox \"e7dd46aef52751af1bc30306578d46afe8b7bf9ce7f320542a028ddb5fe68b88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.774831 containerd[1522]: time="2025-05-15T12:12:08.774293880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ll8dl,Uid:5dd74982-3b3d-4f17-955c-aa7334cd7c5a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7dd46aef52751af1bc30306578d46afe8b7bf9ce7f320542a028ddb5fe68b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.776632 kubelet[2625]: E0515 
12:12:08.776566 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7dd46aef52751af1bc30306578d46afe8b7bf9ce7f320542a028ddb5fe68b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.776722 kubelet[2625]: E0515 12:12:08.776681 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7dd46aef52751af1bc30306578d46afe8b7bf9ce7f320542a028ddb5fe68b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ll8dl" May 15 12:12:08.776722 kubelet[2625]: E0515 12:12:08.776701 2625 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7dd46aef52751af1bc30306578d46afe8b7bf9ce7f320542a028ddb5fe68b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ll8dl" May 15 12:12:08.777306 kubelet[2625]: E0515 12:12:08.777255 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ll8dl_kube-system(5dd74982-3b3d-4f17-955c-aa7334cd7c5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ll8dl_kube-system(5dd74982-3b3d-4f17-955c-aa7334cd7c5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7dd46aef52751af1bc30306578d46afe8b7bf9ce7f320542a028ddb5fe68b88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ll8dl" podUID="5dd74982-3b3d-4f17-955c-aa7334cd7c5a" May 15 12:12:08.777598 containerd[1522]: time="2025-05-15T12:12:08.777565242Z" level=error msg="Failed to destroy network for sandbox \"3f5dc6bdf877c9c67b74a496dafe0c07e0b1884e35d0c4811fb8016ea06422bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.778554 containerd[1522]: time="2025-05-15T12:12:08.778462362Z" level=error msg="Failed to destroy network for sandbox \"c69fd3ca1eaf3503e2d4d06f2afb469ef6b5b26a2756c23cb409f8e0dd9214c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.779240 containerd[1522]: time="2025-05-15T12:12:08.779201263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6588b76679-g7dkc,Uid:033e6c41-c7e1-4f89-99b1-69274c65f4ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f5dc6bdf877c9c67b74a496dafe0c07e0b1884e35d0c4811fb8016ea06422bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
May 15 12:12:08.779435 kubelet[2625]: E0515 12:12:08.779397 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f5dc6bdf877c9c67b74a496dafe0c07e0b1884e35d0c4811fb8016ea06422bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.779475 kubelet[2625]: E0515 12:12:08.779447 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f5dc6bdf877c9c67b74a496dafe0c07e0b1884e35d0c4811fb8016ea06422bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6588b76679-g7dkc" May 15 12:12:08.779475 kubelet[2625]: E0515 12:12:08.779466 2625 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f5dc6bdf877c9c67b74a496dafe0c07e0b1884e35d0c4811fb8016ea06422bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6588b76679-g7dkc" May 15 12:12:08.779592 kubelet[2625]: E0515 12:12:08.779561 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6588b76679-g7dkc_calico-system(033e6c41-c7e1-4f89-99b1-69274c65f4ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6588b76679-g7dkc_calico-system(033e6c41-c7e1-4f89-99b1-69274c65f4ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f5dc6bdf877c9c67b74a496dafe0c07e0b1884e35d0c4811fb8016ea06422bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6588b76679-g7dkc" podUID="033e6c41-c7e1-4f89-99b1-69274c65f4ef" May 15 12:12:08.781256 containerd[1522]: time="2025-05-15T12:12:08.781169879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl6hb,Uid:6eb3b675-ac76-4d65-8306-ee2c30f0c7f1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69fd3ca1eaf3503e2d4d06f2afb469ef6b5b26a2756c23cb409f8e0dd9214c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.781772 containerd[1522]: time="2025-05-15T12:12:08.781733563Z" level=error msg="Failed to destroy network for sandbox \"352066b5110f9f1ad44b8c0043f24378d59740f5f58ffd9e7b9326649c847636\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.782042 kubelet[2625]: E0515 12:12:08.782011 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69fd3ca1eaf3503e2d4d06f2afb469ef6b5b26a2756c23cb409f8e0dd9214c6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.782091 kubelet[2625]: E0515 12:12:08.782057 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69fd3ca1eaf3503e2d4d06f2afb469ef6b5b26a2756c23cb409f8e0dd9214c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl6hb" May 15 12:12:08.782122 kubelet[2625]: E0515 12:12:08.782091 2625 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69fd3ca1eaf3503e2d4d06f2afb469ef6b5b26a2756c23cb409f8e0dd9214c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl6hb" May 15 12:12:08.782147 kubelet[2625]: E0515 12:12:08.782127 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl6hb_calico-system(6eb3b675-ac76-4d65-8306-ee2c30f0c7f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl6hb_calico-system(6eb3b675-ac76-4d65-8306-ee2c30f0c7f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c69fd3ca1eaf3503e2d4d06f2afb469ef6b5b26a2756c23cb409f8e0dd9214c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl6hb" podUID="6eb3b675-ac76-4d65-8306-ee2c30f0c7f1" May 15 12:12:08.783331 containerd[1522]: time="2025-05-15T12:12:08.783293635Z" level=error msg="Failed to destroy network for sandbox \"babd3f29f0dac9e8970e77f4113bfca12058345aefee44f5f14a21e3cca1bac8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.783648 containerd[1522]: time="2025-05-15T12:12:08.783607913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7b9b4746-mr7dx,Uid:277b8b69-903f-40cf-a8b0-2af268672361,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"352066b5110f9f1ad44b8c0043f24378d59740f5f58ffd9e7b9326649c847636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.783872 kubelet[2625]: E0515 12:12:08.783844 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"352066b5110f9f1ad44b8c0043f24378d59740f5f58ffd9e7b9326649c847636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.783932 kubelet[2625]: E0515 12:12:08.783884 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"352066b5110f9f1ad44b8c0043f24378d59740f5f58ffd9e7b9326649c847636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c7b9b4746-mr7dx" May 15 12:12:08.783932 kubelet[2625]: E0515 12:12:08.783901 2625 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"352066b5110f9f1ad44b8c0043f24378d59740f5f58ffd9e7b9326649c847636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c7b9b4746-mr7dx" May 15 12:12:08.783977 kubelet[2625]: E0515 12:12:08.783954 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c7b9b4746-mr7dx_calico-apiserver(277b8b69-903f-40cf-a8b0-2af268672361)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c7b9b4746-mr7dx_calico-apiserver(277b8b69-903f-40cf-a8b0-2af268672361)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"352066b5110f9f1ad44b8c0043f24378d59740f5f58ffd9e7b9326649c847636\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c7b9b4746-mr7dx" podUID="277b8b69-903f-40cf-a8b0-2af268672361" May 15 12:12:08.784372 containerd[1522]: time="2025-05-15T12:12:08.784214911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7b9b4746-x7jtk,Uid:c2182c51-05db-4f74-8a4c-820fa5a2345e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd3f29f0dac9e8970e77f4113bfca12058345aefee44f5f14a21e3cca1bac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.784372 containerd[1522]: time="2025-05-15T12:12:08.784294341Z" level=error msg="Failed to destroy network for sandbox \"f5bff6f5609cb7340d008a11b0c929ceb92146672988655b4be9a313ae53368a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.784495 kubelet[2625]: E0515 12:12:08.784340 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd3f29f0dac9e8970e77f4113bfca12058345aefee44f5f14a21e3cca1bac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.784495 kubelet[2625]: E0515 12:12:08.784370 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd3f29f0dac9e8970e77f4113bfca12058345aefee44f5f14a21e3cca1bac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-c7b9b4746-x7jtk" May 15 12:12:08.784495 kubelet[2625]: E0515 12:12:08.784421 2625 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd3f29f0dac9e8970e77f4113bfca12058345aefee44f5f14a21e3cca1bac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c7b9b4746-x7jtk" May 15 12:12:08.784576 kubelet[2625]: E0515 12:12:08.784454 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c7b9b4746-x7jtk_calico-apiserver(c2182c51-05db-4f74-8a4c-820fa5a2345e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c7b9b4746-x7jtk_calico-apiserver(c2182c51-05db-4f74-8a4c-820fa5a2345e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"babd3f29f0dac9e8970e77f4113bfca12058345aefee44f5f14a21e3cca1bac8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c7b9b4746-x7jtk" podUID="c2182c51-05db-4f74-8a4c-820fa5a2345e" May 15 12:12:08.785916 containerd[1522]: time="2025-05-15T12:12:08.785873849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mjw8m,Uid:ec08a749-bfa9-479d-ad96-3a7485e9dad7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bff6f5609cb7340d008a11b0c929ceb92146672988655b4be9a313ae53368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.786215 kubelet[2625]: E0515 12:12:08.786159 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bff6f5609cb7340d008a11b0c929ceb92146672988655b4be9a313ae53368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:12:08.786215 kubelet[2625]: E0515 12:12:08.786211 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bff6f5609cb7340d008a11b0c929ceb92146672988655b4be9a313ae53368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-mjw8m" May 15 12:12:08.786280 kubelet[2625]: E0515 12:12:08.786226 2625 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bff6f5609cb7340d008a11b0c929ceb92146672988655b4be9a313ae53368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-mjw8m" May 15 12:12:08.786280 kubelet[2625]: E0515 12:12:08.786263 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-6f6b679f8f-mjw8m_kube-system(ec08a749-bfa9-479d-ad96-3a7485e9dad7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-mjw8m_kube-system(ec08a749-bfa9-479d-ad96-3a7485e9dad7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5bff6f5609cb7340d008a11b0c929ceb92146672988655b4be9a313ae53368a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-mjw8m" podUID="ec08a749-bfa9-479d-ad96-3a7485e9dad7" May 15 12:12:09.291295 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:35972.service - OpenSSH per-connection server daemon (10.0.0.1:35972). May 15 12:12:09.356457 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 35972 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:12:09.357804 sshd-session[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:12:09.362296 systemd-logind[1503]: New session 9 of user core. May 15 12:12:09.368378 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 12:12:09.440381 systemd[1]: run-netns-cni\x2dc25f343f\x2db8cb\x2d7665\x2d6252\x2d4cad7e726c53.mount: Deactivated successfully. May 15 12:12:09.440682 systemd[1]: run-netns-cni\x2d33080a8d\x2db319\x2de47c\x2d6ddf\x2dd2ff4fa701f0.mount: Deactivated successfully. May 15 12:12:09.440749 systemd[1]: run-netns-cni\x2d158fce23\x2d9ff1\x2d2f53\x2d73a5\x2de695154c5a76.mount: Deactivated successfully. May 15 12:12:09.440793 systemd[1]: run-netns-cni\x2da498ce6b\x2d69e0\x2d3ff5\x2dc149\x2d5fe76280e38b.mount: Deactivated successfully. May 15 12:12:09.440836 systemd[1]: run-netns-cni\x2d3a3decd5\x2d2cb7\x2dbcc8\x2d9722\x2d2b76e54f03be.mount: Deactivated successfully. May 15 12:12:09.440891 systemd[1]: run-netns-cni\x2da3419aa6\x2d44e2\x2d575c\x2deab7\x2deb9942d65bb5.mount: Deactivated successfully. May 15 12:12:09.485569 sshd[3565]: Connection closed by 10.0.0.1 port 35972 May 15 12:12:09.485946 sshd-session[3563]: pam_unix(sshd:session): session closed for user core May 15 12:12:09.489869 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:35972.service: Deactivated successfully. May 15 12:12:09.492089 systemd[1]: session-9.scope: Deactivated successfully. May 15 12:12:09.494784 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit. May 15 12:12:09.496755 systemd-logind[1503]: Removed session 9. May 15 12:12:14.499283 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:35584.service - OpenSSH per-connection server daemon (10.0.0.1:35584). May 15 12:12:14.570773 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 35584 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:12:14.572697 sshd-session[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:12:14.585117 systemd-logind[1503]: New session 10 of user core. May 15 12:12:14.593414 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 12:12:14.601156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2742131440.mount: Deactivated successfully. 
May 15 12:12:14.876569 containerd[1522]: time="2025-05-15T12:12:14.876486798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:14.877793 containerd[1522]: time="2025-05-15T12:12:14.877762376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 15 12:12:14.879558 containerd[1522]: time="2025-05-15T12:12:14.879523382Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:14.882887 containerd[1522]: time="2025-05-15T12:12:14.882848934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:14.885009 containerd[1522]: time="2025-05-15T12:12:14.884973418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 6.410024617s" May 15 12:12:14.885062 containerd[1522]: time="2025-05-15T12:12:14.885007895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 15 12:12:14.896229 sshd[3588]: Connection closed by 10.0.0.1 port 35584 May 15 12:12:14.896519 containerd[1522]: time="2025-05-15T12:12:14.896268128Z" level=info msg="CreateContainer within sandbox \"c0d2088b6bbdc34f2e03b3967793efdcea2c1753c35111b122b414d3f40d0674\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 12:12:14.896544 sshd-session[3586]: pam_unix(sshd:session): session closed for user core May 15 12:12:14.900590 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:35584.service: Deactivated successfully. May 15 12:12:14.902458 systemd[1]: session-10.scope: Deactivated successfully. May 15 12:12:14.905260 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit. May 15 12:12:14.907481 systemd-logind[1503]: Removed session 10. 
May 15 12:12:14.916834 containerd[1522]: time="2025-05-15T12:12:14.916374663Z" level=info msg="Container 61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d: CDI devices from CRI Config.CDIDevices: []" May 15 12:12:14.925789 containerd[1522]: time="2025-05-15T12:12:14.925742266Z" level=info msg="CreateContainer within sandbox \"c0d2088b6bbdc34f2e03b3967793efdcea2c1753c35111b122b414d3f40d0674\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d\"" May 15 12:12:14.926643 containerd[1522]: time="2025-05-15T12:12:14.926537818Z" level=info msg="StartContainer for \"61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d\"" May 15 12:12:14.928463 containerd[1522]: time="2025-05-15T12:12:14.928429369Z" level=info msg="connecting to shim 61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d" address="unix:///run/containerd/s/360931ffb1b5ff10174b47735a5f4cb96a5418af62482d332f82b022691cedf1" protocol=ttrpc version=3 May 15 12:12:14.947361 systemd[1]: Started cri-containerd-61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d.scope - libcontainer container 61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d. May 15 12:12:14.986312 containerd[1522]: time="2025-05-15T12:12:14.986275527Z" level=info msg="StartContainer for \"61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d\" returns successfully" May 15 12:12:15.186220 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 12:12:15.186323 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 15 12:12:15.591097 containerd[1522]: time="2025-05-15T12:12:15.591051476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d\" id:\"63da091849104dc73a2366d363bfe4dd9c85b5f46f7236281349227d9861dfd6\" pid:3678 exit_status:1 exited_at:{seconds:1747311135 nanos:590743429}" May 15 12:12:16.647444 containerd[1522]: time="2025-05-15T12:12:16.647291679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d\" id:\"186f95567ff286178f537cf9579e82df437d97bbc60ba4f899c4b2a1bd62eb0b\" pid:3768 exit_status:1 exited_at:{seconds:1747311136 nanos:646970752}" May 15 12:12:16.897667 systemd-networkd[1440]: vxlan.calico: Link UP May 15 12:12:16.897673 systemd-networkd[1440]: vxlan.calico: Gained carrier May 15 12:12:17.952498 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL May 15 12:12:19.377803 containerd[1522]: time="2025-05-15T12:12:19.377689764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7b9b4746-x7jtk,Uid:c2182c51-05db-4f74-8a4c-820fa5a2345e,Namespace:calico-apiserver,Attempt:0,}" May 15 12:12:19.378169 containerd[1522]: time="2025-05-15T12:12:19.377689844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7b9b4746-mr7dx,Uid:277b8b69-903f-40cf-a8b0-2af268672361,Namespace:calico-apiserver,Attempt:0,}" May 15 12:12:19.737670 systemd-networkd[1440]: cali2bf66b527f6: Link UP May 15 12:12:19.738494 systemd-networkd[1440]: cali2bf66b527f6: Gained carrier May 15 12:12:19.752101 containerd[1522]: 2025-05-15 12:12:19.508 [INFO][3923] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0 calico-apiserver-c7b9b4746- calico-apiserver 
c2182c51-05db-4f74-8a4c-820fa5a2345e 769 0 2025-05-15 12:11:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c7b9b4746 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c7b9b4746-x7jtk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2bf66b527f6 [] []}} ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-x7jtk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-" May 15 12:12:19.752101 containerd[1522]: 2025-05-15 12:12:19.508 [INFO][3923] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-x7jtk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0" May 15 12:12:19.752101 containerd[1522]: 2025-05-15 12:12:19.675 [INFO][3952] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" HandleID="k8s-pod-network.276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" Workload="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0" May 15 12:12:19.752855 containerd[1522]: 2025-05-15 12:12:19.694 [INFO][3952] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" HandleID="k8s-pod-network.276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" Workload="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001adf20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c7b9b4746-x7jtk", "timestamp":"2025-05-15 12:12:19.675683946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:12:19.752855 containerd[1522]: 2025-05-15 12:12:19.694 [INFO][3952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:12:19.752855 containerd[1522]: 2025-05-15 12:12:19.694 [INFO][3952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:12:19.752855 containerd[1522]: 2025-05-15 12:12:19.695 [INFO][3952] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:12:19.752855 containerd[1522]: 2025-05-15 12:12:19.697 [INFO][3952] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" host="localhost" May 15 12:12:19.752855 containerd[1522]: 2025-05-15 12:12:19.706 [INFO][3952] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:12:19.752855 containerd[1522]: 2025-05-15 12:12:19.713 [INFO][3952] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:12:19.752855 containerd[1522]: 2025-05-15 12:12:19.715 [INFO][3952] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:12:19.752855 containerd[1522]: 2025-05-15 12:12:19.717 [INFO][3952] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:12:19.752855 containerd[1522]: 2025-05-15 12:12:19.717 [INFO][3952] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" host="localhost" May 15 12:12:19.753087 containerd[1522]: 2025-05-15 12:12:19.719 [INFO][3952] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8 May 15 12:12:19.753087 containerd[1522]: 2025-05-15 12:12:19.722 [INFO][3952] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" host="localhost" May 15 12:12:19.753087 containerd[1522]: 2025-05-15 12:12:19.728 [INFO][3952] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" host="localhost" May 15 12:12:19.753087 containerd[1522]: 2025-05-15 12:12:19.728 [INFO][3952] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" host="localhost" May 15 12:12:19.753087 containerd[1522]: 2025-05-15 12:12:19.728 [INFO][3952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:12:19.753087 containerd[1522]: 2025-05-15 12:12:19.728 [INFO][3952] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" HandleID="k8s-pod-network.276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" Workload="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0" May 15 12:12:19.753249 containerd[1522]: 2025-05-15 12:12:19.733 [INFO][3923] cni-plugin/k8s.go 386: Populated endpoint ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-x7jtk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0", GenerateName:"calico-apiserver-c7b9b4746-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2182c51-05db-4f74-8a4c-820fa5a2345e", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7b9b4746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c7b9b4746-x7jtk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2bf66b527f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:19.753313 containerd[1522]: 2025-05-15 12:12:19.733 [INFO][3923] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-x7jtk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0" May 15 12:12:19.753313 containerd[1522]: 2025-05-15 12:12:19.733 [INFO][3923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bf66b527f6 ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-x7jtk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0" May 15 12:12:19.753313 containerd[1522]: 2025-05-15 12:12:19.738 [INFO][3923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-x7jtk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0" May 15 12:12:19.753379 containerd[1522]: 2025-05-15 12:12:19.738 [INFO][3923] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" 
Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-x7jtk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0", GenerateName:"calico-apiserver-c7b9b4746-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2182c51-05db-4f74-8a4c-820fa5a2345e", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7b9b4746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8", Pod:"calico-apiserver-c7b9b4746-x7jtk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2bf66b527f6", MAC:"1a:5c:ac:10:f9:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:19.753427 containerd[1522]: 2025-05-15 12:12:19.748 [INFO][3923] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-x7jtk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--x7jtk-eth0" May 15 12:12:19.765225 kubelet[2625]: I0515 12:12:19.764955 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tt2hq" podStartSLOduration=6.319144487 podStartE2EDuration="33.764934478s" podCreationTimestamp="2025-05-15 12:11:46 +0000 UTC" firstStartedPulling="2025-05-15 12:11:47.440466045 +0000 UTC m=+15.157011925" lastFinishedPulling="2025-05-15 12:12:14.886256036 +0000 UTC m=+42.602801916" observedRunningTime="2025-05-15 12:12:15.517520559 +0000 UTC m=+43.234066439" watchObservedRunningTime="2025-05-15 12:12:19.764934478 +0000 UTC m=+47.481480318" May 15 12:12:19.834381 systemd-networkd[1440]: calied0456c20a6: Link UP May 15 12:12:19.835235 systemd-networkd[1440]: calied0456c20a6: Gained carrier May 15 12:12:19.840354 containerd[1522]: time="2025-05-15T12:12:19.840302362Z" level=info msg="connecting to shim 276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8" address="unix:///run/containerd/s/32f833b4666d65885aea2608115edc6000dead23b53083218b452e06ee51db39" namespace=k8s.io protocol=ttrpc version=3 May 15 12:12:19.857289 containerd[1522]: 2025-05-15 12:12:19.508 [INFO][3926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0 calico-apiserver-c7b9b4746- calico-apiserver 277b8b69-903f-40cf-a8b0-2af268672361 771 0 2025-05-15 12:11:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:c7b9b4746 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c7b9b4746-mr7dx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied0456c20a6 [] []}} ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-mr7dx" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-" May 15 12:12:19.857289 containerd[1522]: 2025-05-15 12:12:19.508 [INFO][3926] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-mr7dx" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0" May 15 12:12:19.857289 containerd[1522]: 2025-05-15 12:12:19.675 [INFO][3953] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" HandleID="k8s-pod-network.42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" Workload="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0" May 15 12:12:19.857563 containerd[1522]: 2025-05-15 12:12:19.694 [INFO][3953] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" HandleID="k8s-pod-network.42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" Workload="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000133e90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c7b9b4746-mr7dx", "timestamp":"2025-05-15 12:12:19.675600834 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:12:19.857563 containerd[1522]: 2025-05-15 12:12:19.694 [INFO][3953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:12:19.857563 containerd[1522]: 2025-05-15 12:12:19.728 [INFO][3953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:12:19.857563 containerd[1522]: 2025-05-15 12:12:19.728 [INFO][3953] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:12:19.857563 containerd[1522]: 2025-05-15 12:12:19.798 [INFO][3953] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" host="localhost" May 15 12:12:19.857563 containerd[1522]: 2025-05-15 12:12:19.803 [INFO][3953] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:12:19.857563 containerd[1522]: 2025-05-15 12:12:19.812 [INFO][3953] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:12:19.857563 containerd[1522]: 2025-05-15 12:12:19.814 [INFO][3953] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:12:19.857563 containerd[1522]: 2025-05-15 12:12:19.816 [INFO][3953] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:12:19.857563 containerd[1522]: 2025-05-15 12:12:19.816 [INFO][3953] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" host="localhost" May 15 12:12:19.857778 containerd[1522]: 2025-05-15 12:12:19.818 [INFO][3953] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b May 15 12:12:19.857778 containerd[1522]: 2025-05-15 12:12:19.822 [INFO][3953] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" host="localhost" May 15 12:12:19.857778 containerd[1522]: 2025-05-15 12:12:19.829 [INFO][3953] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" host="localhost" May 15 12:12:19.857778 containerd[1522]: 2025-05-15 12:12:19.829 [INFO][3953] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" host="localhost" May 15 12:12:19.857778 containerd[1522]: 2025-05-15 12:12:19.829 [INFO][3953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:12:19.857778 containerd[1522]: 2025-05-15 12:12:19.829 [INFO][3953] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" HandleID="k8s-pod-network.42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" Workload="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0" May 15 12:12:19.858430 containerd[1522]: 2025-05-15 12:12:19.831 [INFO][3926] cni-plugin/k8s.go 386: Populated endpoint ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-mr7dx" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0", GenerateName:"calico-apiserver-c7b9b4746-", Namespace:"calico-apiserver", SelfLink:"", UID:"277b8b69-903f-40cf-a8b0-2af268672361", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7b9b4746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c7b9b4746-mr7dx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied0456c20a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:19.858493 containerd[1522]: 2025-05-15 12:12:19.832 [INFO][3926] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-mr7dx" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0" May 15 12:12:19.858493 containerd[1522]: 2025-05-15 12:12:19.832 [INFO][3926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied0456c20a6 ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-mr7dx" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0" May 15 12:12:19.858493 containerd[1522]: 2025-05-15 12:12:19.835 [INFO][3926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-mr7dx" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0" May 15 12:12:19.858710 containerd[1522]: 2025-05-15 12:12:19.836 [INFO][3926] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" 
Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-mr7dx" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0", GenerateName:"calico-apiserver-c7b9b4746-", Namespace:"calico-apiserver", SelfLink:"", UID:"277b8b69-903f-40cf-a8b0-2af268672361", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7b9b4746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b", Pod:"calico-apiserver-c7b9b4746-mr7dx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied0456c20a6", MAC:"d2:83:23:fd:9e:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:19.858856 containerd[1522]: 2025-05-15 12:12:19.849 [INFO][3926] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" Namespace="calico-apiserver" Pod="calico-apiserver-c7b9b4746-mr7dx" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7b9b4746--mr7dx-eth0" May 15 12:12:19.880129 containerd[1522]: time="2025-05-15T12:12:19.880088165Z" level=info msg="connecting to shim 42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b" address="unix:///run/containerd/s/af6c48abe2cb0fb4785e4f96651b99107fd0e1e113f62ae446b88e87300100fc" namespace=k8s.io protocol=ttrpc version=3 May 15 12:12:19.889396 systemd[1]: Started cri-containerd-276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8.scope - libcontainer container 276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8. May 15 12:12:19.903569 systemd[1]: Started cri-containerd-42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b.scope - libcontainer container 42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b. May 15 12:12:19.905063 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:35596.service - OpenSSH per-connection server daemon (10.0.0.1:35596). 
May 15 12:12:19.911424 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 12:12:19.918116 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 12:12:19.944255 containerd[1522]: time="2025-05-15T12:12:19.944199551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7b9b4746-x7jtk,Uid:c2182c51-05db-4f74-8a4c-820fa5a2345e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8\"" May 15 12:12:19.952965 containerd[1522]: time="2025-05-15T12:12:19.952933047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7b9b4746-mr7dx,Uid:277b8b69-903f-40cf-a8b0-2af268672361,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b\"" May 15 12:12:19.956356 containerd[1522]: time="2025-05-15T12:12:19.956247494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 12:12:19.958722 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 35596 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:12:19.960150 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:12:19.965754 systemd-logind[1503]: New session 11 of user core. May 15 12:12:19.972367 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 12:12:20.090961 sshd[4094]: Connection closed by 10.0.0.1 port 35596 May 15 12:12:20.091931 sshd-session[4070]: pam_unix(sshd:session): session closed for user core May 15 12:12:20.099247 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:35596.service: Deactivated successfully. May 15 12:12:20.102569 systemd[1]: session-11.scope: Deactivated successfully. May 15 12:12:20.104079 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit. May 15 12:12:20.107610 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:35604.service - OpenSSH per-connection server daemon (10.0.0.1:35604). May 15 12:12:20.108516 systemd-logind[1503]: Removed session 11. May 15 12:12:20.157182 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 35604 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:12:20.158672 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:12:20.163360 systemd-logind[1503]: New session 12 of user core. May 15 12:12:20.172381 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 12:12:20.321631 sshd[4111]: Connection closed by 10.0.0.1 port 35604 May 15 12:12:20.323404 sshd-session[4109]: pam_unix(sshd:session): session closed for user core May 15 12:12:20.332065 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:35604.service: Deactivated successfully. May 15 12:12:20.334933 systemd[1]: session-12.scope: Deactivated successfully. May 15 12:12:20.340282 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit. May 15 12:12:20.341959 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:35612.service - OpenSSH per-connection server daemon (10.0.0.1:35612). May 15 12:12:20.345179 systemd-logind[1503]: Removed session 12. 
May 15 12:12:20.396885 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 35612 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:12:20.398684 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:12:20.406235 systemd-logind[1503]: New session 13 of user core. May 15 12:12:20.412559 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 12:12:20.559828 sshd[4125]: Connection closed by 10.0.0.1 port 35612 May 15 12:12:20.560403 sshd-session[4123]: pam_unix(sshd:session): session closed for user core May 15 12:12:20.564163 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:35612.service: Deactivated successfully. May 15 12:12:20.566429 systemd[1]: session-13.scope: Deactivated successfully. May 15 12:12:20.567349 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit. May 15 12:12:20.568641 systemd-logind[1503]: Removed session 13. May 15 12:12:21.152331 systemd-networkd[1440]: cali2bf66b527f6: Gained IPv6LL May 15 12:12:21.216314 systemd-networkd[1440]: calied0456c20a6: Gained IPv6LL May 15 12:12:21.378379 containerd[1522]: time="2025-05-15T12:12:21.378235927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6588b76679-g7dkc,Uid:033e6c41-c7e1-4f89-99b1-69274c65f4ef,Namespace:calico-system,Attempt:0,}" May 15 12:12:21.378720 containerd[1522]: time="2025-05-15T12:12:21.378482585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl6hb,Uid:6eb3b675-ac76-4d65-8306-ee2c30f0c7f1,Namespace:calico-system,Attempt:0,}" May 15 12:12:21.378720 containerd[1522]: time="2025-05-15T12:12:21.378574017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ll8dl,Uid:5dd74982-3b3d-4f17-955c-aa7334cd7c5a,Namespace:kube-system,Attempt:0,}" May 15 12:12:21.507784 systemd-networkd[1440]: cali725755bc943: Link UP May 15 12:12:21.508258 systemd-networkd[1440]: cali725755bc943: Gained carrier May 15 12:12:21.522715 containerd[1522]: 2025-05-15 12:12:21.427 [INFO][4149] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jl6hb-eth0 csi-node-driver- calico-system 6eb3b675-ac76-4d65-8306-ee2c30f0c7f1 616 0 2025-05-15 12:11:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jl6hb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali725755bc943 [] []}} ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Namespace="calico-system" Pod="csi-node-driver-jl6hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl6hb-" May 15 12:12:21.522715 containerd[1522]: 2025-05-15 12:12:21.427 [INFO][4149] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Namespace="calico-system" Pod="csi-node-driver-jl6hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl6hb-eth0" May 15 12:12:21.522715 containerd[1522]: 2025-05-15 12:12:21.461 [INFO][4182] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" 
HandleID="k8s-pod-network.265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Workload="localhost-k8s-csi--node--driver--jl6hb-eth0" May 15 12:12:21.523142 containerd[1522]: 2025-05-15 12:12:21.475 [INFO][4182] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" HandleID="k8s-pod-network.265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Workload="localhost-k8s-csi--node--driver--jl6hb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000680680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jl6hb", "timestamp":"2025-05-15 12:12:21.461245211 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:12:21.523142 containerd[1522]: 2025-05-15 12:12:21.476 [INFO][4182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:12:21.523142 containerd[1522]: 2025-05-15 12:12:21.476 [INFO][4182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:12:21.523142 containerd[1522]: 2025-05-15 12:12:21.476 [INFO][4182] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:12:21.523142 containerd[1522]: 2025-05-15 12:12:21.479 [INFO][4182] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" host="localhost" May 15 12:12:21.523142 containerd[1522]: 2025-05-15 12:12:21.483 [INFO][4182] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:12:21.523142 containerd[1522]: 2025-05-15 12:12:21.489 [INFO][4182] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:12:21.523142 containerd[1522]: 2025-05-15 12:12:21.490 [INFO][4182] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:12:21.523142 containerd[1522]: 2025-05-15 12:12:21.492 [INFO][4182] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:12:21.523142 containerd[1522]: 2025-05-15 12:12:21.493 [INFO][4182] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" host="localhost" May 15 12:12:21.523494 containerd[1522]: 2025-05-15 12:12:21.494 [INFO][4182] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44 May 15 12:12:21.523494 containerd[1522]: 2025-05-15 12:12:21.497 [INFO][4182] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" host="localhost" May 15 12:12:21.523494 containerd[1522]: 2025-05-15 12:12:21.502 [INFO][4182] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" host="localhost" May 15 12:12:21.523494 containerd[1522]: 2025-05-15 12:12:21.503 [INFO][4182] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" host="localhost" May 15 
12:12:21.523494 containerd[1522]: 2025-05-15 12:12:21.503 [INFO][4182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:12:21.523494 containerd[1522]: 2025-05-15 12:12:21.503 [INFO][4182] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" HandleID="k8s-pod-network.265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Workload="localhost-k8s-csi--node--driver--jl6hb-eth0" May 15 12:12:21.523623 containerd[1522]: 2025-05-15 12:12:21.505 [INFO][4149] cni-plugin/k8s.go 386: Populated endpoint ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Namespace="calico-system" Pod="csi-node-driver-jl6hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl6hb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jl6hb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6eb3b675-ac76-4d65-8306-ee2c30f0c7f1", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jl6hb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali725755bc943", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:21.523623 containerd[1522]: 2025-05-15 12:12:21.505 [INFO][4149] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Namespace="calico-system" Pod="csi-node-driver-jl6hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl6hb-eth0" May 15 12:12:21.523734 containerd[1522]: 2025-05-15 12:12:21.505 [INFO][4149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali725755bc943 ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Namespace="calico-system" Pod="csi-node-driver-jl6hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl6hb-eth0" May 15 12:12:21.523734 containerd[1522]: 2025-05-15 12:12:21.508 [INFO][4149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Namespace="calico-system" Pod="csi-node-driver-jl6hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl6hb-eth0" May 15 12:12:21.523804 containerd[1522]: 2025-05-15 12:12:21.509 [INFO][4149] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Namespace="calico-system" Pod="csi-node-driver-jl6hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl6hb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jl6hb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6eb3b675-ac76-4d65-8306-ee2c30f0c7f1", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44", Pod:"csi-node-driver-jl6hb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali725755bc943", MAC:"7a:ab:f4:39:86:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:21.523858 containerd[1522]: 2025-05-15 12:12:21.519 [INFO][4149] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" Namespace="calico-system" Pod="csi-node-driver-jl6hb" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl6hb-eth0" May 15 12:12:21.545855 containerd[1522]: time="2025-05-15T12:12:21.545799718Z" level=info msg="connecting to shim 265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44" address="unix:///run/containerd/s/29d897290df11f4b449469e831be66e62e02b6c2d892419c22a4cbfc65bb4b21" namespace=k8s.io protocol=ttrpc version=3 May 15 12:12:21.570355 systemd[1]: Started cri-containerd-265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44.scope - libcontainer container 265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44. 
May 15 12:12:21.581484 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 12:12:21.598049 containerd[1522]: time="2025-05-15T12:12:21.597996973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl6hb,Uid:6eb3b675-ac76-4d65-8306-ee2c30f0c7f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44\"" May 15 12:12:21.614372 systemd-networkd[1440]: cali3cdfb115dea: Link UP May 15 12:12:21.614524 systemd-networkd[1440]: cali3cdfb115dea: Gained carrier May 15 12:12:21.625980 containerd[1522]: 2025-05-15 12:12:21.433 [INFO][4156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0 coredns-6f6b679f8f- kube-system 5dd74982-3b3d-4f17-955c-aa7334cd7c5a 768 0 2025-05-15 12:11:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-ll8dl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3cdfb115dea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-ll8dl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ll8dl-" May 15 12:12:21.625980 containerd[1522]: 2025-05-15 12:12:21.433 [INFO][4156] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-ll8dl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0" May 15 12:12:21.625980 containerd[1522]: 2025-05-15 12:12:21.463 [INFO][4189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" HandleID="k8s-pod-network.5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Workload="localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0" May 15 12:12:21.626328 containerd[1522]: 2025-05-15 12:12:21.478 [INFO][4189] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" HandleID="k8s-pod-network.5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Workload="localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000391090), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-ll8dl", "timestamp":"2025-05-15 12:12:21.463097447 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:12:21.626328 containerd[1522]: 2025-05-15 12:12:21.478 [INFO][4189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:12:21.626328 containerd[1522]: 2025-05-15 12:12:21.503 [INFO][4189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:12:21.626328 containerd[1522]: 2025-05-15 12:12:21.503 [INFO][4189] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:12:21.626328 containerd[1522]: 2025-05-15 12:12:21.579 [INFO][4189] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" host="localhost" May 15 12:12:21.626328 containerd[1522]: 2025-05-15 12:12:21.586 [INFO][4189] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:12:21.626328 containerd[1522]: 2025-05-15 12:12:21.591 [INFO][4189] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:12:21.626328 containerd[1522]: 2025-05-15 12:12:21.593 [INFO][4189] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:12:21.626328 containerd[1522]: 2025-05-15 12:12:21.596 [INFO][4189] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:12:21.626328 containerd[1522]: 2025-05-15 12:12:21.596 [INFO][4189] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" host="localhost" May 15 12:12:21.626551 containerd[1522]: 2025-05-15 12:12:21.597 [INFO][4189] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a May 15 12:12:21.626551 containerd[1522]: 2025-05-15 12:12:21.601 [INFO][4189] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" host="localhost" May 15 12:12:21.626551 containerd[1522]: 2025-05-15 12:12:21.607 [INFO][4189] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" host="localhost" May 15 12:12:21.626551 containerd[1522]: 2025-05-15 12:12:21.608 [INFO][4189] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" host="localhost" May 15 12:12:21.626551 containerd[1522]: 2025-05-15 12:12:21.608 [INFO][4189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:12:21.626551 containerd[1522]: 2025-05-15 12:12:21.608 [INFO][4189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" HandleID="k8s-pod-network.5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Workload="localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0" May 15 12:12:21.626704 containerd[1522]: 2025-05-15 12:12:21.611 [INFO][4156] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-ll8dl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5dd74982-3b3d-4f17-955c-aa7334cd7c5a", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-ll8dl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cdfb115dea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:21.626758 containerd[1522]: 2025-05-15 12:12:21.612 [INFO][4156] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-ll8dl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0" May 15 12:12:21.626758 containerd[1522]: 2025-05-15 12:12:21.612 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cdfb115dea ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-ll8dl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0" May 15 12:12:21.626758 containerd[1522]: 2025-05-15 12:12:21.614 [INFO][4156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-ll8dl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0" May 15 12:12:21.626821 containerd[1522]: 2025-05-15 12:12:21.614 
[INFO][4156] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-ll8dl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5dd74982-3b3d-4f17-955c-aa7334cd7c5a", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a", Pod:"coredns-6f6b679f8f-ll8dl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cdfb115dea", MAC:"1e:7d:93:19:a4:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:21.626821 containerd[1522]: 2025-05-15 12:12:21.622 [INFO][4156] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-ll8dl" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ll8dl-eth0" May 15 12:12:21.650312 containerd[1522]: time="2025-05-15T12:12:21.650223785Z" level=info msg="connecting to shim 5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a" address="unix:///run/containerd/s/72f6a83c08af06baa6a2c27336f35a8891685538b7e6e77b7f39a0d2d458200b" namespace=k8s.io protocol=ttrpc version=3 May 15 12:12:21.679349 systemd[1]: Started cri-containerd-5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a.scope - libcontainer container 5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a. 
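In the WorkloadEndpoint dumps above, the Go struct printer shows port numbers in hex: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (CoreDNS's metrics port). A throwaway check of those conversions:

```go
// Decode the hex port values printed in the WorkloadEndpoint dumps above.
package main

import "fmt"

func main() {
	ports := []struct {
		name string
		val  uint16
	}{{"dns", 0x35}, {"dns-tcp", 0x35}, {"metrics", 0x23c1}}
	for _, p := range ports {
		fmt.Printf("%-8s %d\n", p.name, p.val) // dns 53, dns-tcp 53, metrics 9153
	}
}
```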
May 15 12:12:21.694547 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 12:12:21.716248 systemd-networkd[1440]: calibda385dc656: Link UP May 15 12:12:21.716460 systemd-networkd[1440]: calibda385dc656: Gained carrier May 15 12:12:21.727921 containerd[1522]: time="2025-05-15T12:12:21.727881303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ll8dl,Uid:5dd74982-3b3d-4f17-955c-aa7334cd7c5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a\"" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.435 [INFO][4138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0 calico-kube-controllers-6588b76679- calico-system 033e6c41-c7e1-4f89-99b1-69274c65f4ef 765 0 2025-05-15 12:11:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6588b76679 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6588b76679-g7dkc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibda385dc656 [] []}} ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Namespace="calico-system" Pod="calico-kube-controllers-6588b76679-g7dkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.435 [INFO][4138] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Namespace="calico-system" Pod="calico-kube-controllers-6588b76679-g7dkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.475 [INFO][4191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" HandleID="k8s-pod-network.91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Workload="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.489 [INFO][4191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" HandleID="k8s-pod-network.91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Workload="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003914a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6588b76679-g7dkc", "timestamp":"2025-05-15 12:12:21.475469631 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.489 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.608 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
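Interleaved with the IPAM messages, systemd-networkd reports the host-side Calico veths (cali3cdfb115dea above, calibda385dc656 here, calie29ba156624 later) going "Link UP" and "Gained carrier" as each sandbox is wired up. As a hedged illustration (not part of the log), the same interfaces could be listed on such a node with the standard library alone:

```go
// List host-side Calico veth interfaces (names starting with "cali"),
// the ones systemd-networkd reports as "Link UP" / "Gained carrier" above.
// Illustrative sketch; run on the node itself.
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if strings.HasPrefix(ifc.Name, "cali") {
			fmt.Printf("%-16s mac=%s up=%v\n",
				ifc.Name, ifc.HardwareAddr, ifc.Flags&net.FlagUp != 0)
		}
	}
}
```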
May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.609 [INFO][4191] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.680 [INFO][4191] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" host="localhost" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.686 [INFO][4191] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.691 [INFO][4191] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.693 [INFO][4191] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.696 [INFO][4191] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.696 [INFO][4191] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" host="localhost" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.698 [INFO][4191] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44 May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.701 [INFO][4191] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" host="localhost" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.708 [INFO][4191] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" host="localhost" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.708 [INFO][4191] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" host="localhost" May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.708 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:12:21.731274 containerd[1522]: 2025-05-15 12:12:21.708 [INFO][4191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" HandleID="k8s-pod-network.91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Workload="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0" May 15 12:12:21.732096 containerd[1522]: 2025-05-15 12:12:21.712 [INFO][4138] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Namespace="calico-system" Pod="calico-kube-controllers-6588b76679-g7dkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0", GenerateName:"calico-kube-controllers-6588b76679-", Namespace:"calico-system", SelfLink:"", UID:"033e6c41-c7e1-4f89-99b1-69274c65f4ef", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6588b76679", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6588b76679-g7dkc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibda385dc656", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:21.732096 containerd[1522]: 2025-05-15 12:12:21.712 [INFO][4138] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Namespace="calico-system" Pod="calico-kube-controllers-6588b76679-g7dkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0" May 15 12:12:21.732096 containerd[1522]: 2025-05-15 12:12:21.712 [INFO][4138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibda385dc656 ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Namespace="calico-system" Pod="calico-kube-controllers-6588b76679-g7dkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0" May 15 12:12:21.732096 containerd[1522]: 2025-05-15 12:12:21.716 [INFO][4138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Namespace="calico-system" Pod="calico-kube-controllers-6588b76679-g7dkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0" May 15 12:12:21.732096 containerd[1522]: 2025-05-15 12:12:21.716 [INFO][4138] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Namespace="calico-system" Pod="calico-kube-controllers-6588b76679-g7dkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0", GenerateName:"calico-kube-controllers-6588b76679-", Namespace:"calico-system", SelfLink:"", UID:"033e6c41-c7e1-4f89-99b1-69274c65f4ef", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6588b76679", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44", Pod:"calico-kube-controllers-6588b76679-g7dkc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibda385dc656", MAC:"56:7d:65:59:c0:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:21.732096 containerd[1522]: 2025-05-15 12:12:21.727 [INFO][4138] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" Namespace="calico-system" Pod="calico-kube-controllers-6588b76679-g7dkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6588b76679--g7dkc-eth0" May 15 12:12:21.737438 containerd[1522]: time="2025-05-15T12:12:21.737394660Z" level=info msg="CreateContainer within sandbox \"5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 12:12:21.752969 containerd[1522]: time="2025-05-15T12:12:21.752672266Z" level=info msg="connecting to shim 91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44" address="unix:///run/containerd/s/0582565ef9ef69cb2a54a60ca6dcf59b80a52fda627daea9bf7fa08370a6a01c" namespace=k8s.io protocol=ttrpc version=3 May 15 12:12:21.772513 containerd[1522]: time="2025-05-15T12:12:21.771807770Z" level=info msg="Container 518e0a62d149e82b529b8cc08861549e6fc5655f93ba8dc859df9ff4491d3400: CDI devices from CRI Config.CDIDevices: []" May 15 12:12:21.779069 containerd[1522]: time="2025-05-15T12:12:21.778437543Z" level=info msg="CreateContainer within sandbox \"5fdbc3724c346524a4fec52706773bf5b1d63b16b99897ef14f66919811d5e1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"518e0a62d149e82b529b8cc08861549e6fc5655f93ba8dc859df9ff4491d3400\"" May 15 12:12:21.779464 containerd[1522]: time="2025-05-15T12:12:21.779432655Z" level=info msg="StartContainer for \"518e0a62d149e82b529b8cc08861549e6fc5655f93ba8dc859df9ff4491d3400\"" May 15 12:12:21.780341 systemd[1]: Started 
cri-containerd-91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44.scope - libcontainer container 91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44. May 15 12:12:21.782350 containerd[1522]: time="2025-05-15T12:12:21.782316959Z" level=info msg="connecting to shim 518e0a62d149e82b529b8cc08861549e6fc5655f93ba8dc859df9ff4491d3400" address="unix:///run/containerd/s/72f6a83c08af06baa6a2c27336f35a8891685538b7e6e77b7f39a0d2d458200b" protocol=ttrpc version=3 May 15 12:12:21.803338 systemd[1]: Started cri-containerd-518e0a62d149e82b529b8cc08861549e6fc5655f93ba8dc859df9ff4491d3400.scope - libcontainer container 518e0a62d149e82b529b8cc08861549e6fc5655f93ba8dc859df9ff4491d3400. May 15 12:12:21.809605 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 12:12:21.831209 containerd[1522]: time="2025-05-15T12:12:21.831064759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6588b76679-g7dkc,Uid:033e6c41-c7e1-4f89-99b1-69274c65f4ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44\"" May 15 12:12:21.854656 containerd[1522]: time="2025-05-15T12:12:21.854612553Z" level=info msg="StartContainer for \"518e0a62d149e82b529b8cc08861549e6fc5655f93ba8dc859df9ff4491d3400\" returns successfully" May 15 12:12:22.543142 kubelet[2625]: I0515 12:12:22.542908 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ll8dl" podStartSLOduration=42.542893304 podStartE2EDuration="42.542893304s" podCreationTimestamp="2025-05-15 12:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:12:22.542616447 +0000 UTC m=+50.259162327" watchObservedRunningTime="2025-05-15 12:12:22.542893304 +0000 UTC m=+50.259439144" May 15 12:12:22.880402 systemd-networkd[1440]: cali725755bc943: Gained IPv6LL May 15 12:12:23.072418 systemd-networkd[1440]: calibda385dc656: Gained IPv6LL May 15 12:12:23.136407 systemd-networkd[1440]: cali3cdfb115dea: Gained IPv6LL May 15 12:12:23.378514 containerd[1522]: time="2025-05-15T12:12:23.378471785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mjw8m,Uid:ec08a749-bfa9-479d-ad96-3a7485e9dad7,Namespace:kube-system,Attempt:0,}" May 15 12:12:23.520505 systemd-networkd[1440]: calie29ba156624: Link UP May 15 12:12:23.520708 systemd-networkd[1440]: calie29ba156624: Gained carrier May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.445 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0 coredns-6f6b679f8f- kube-system ec08a749-bfa9-479d-ad96-3a7485e9dad7 767 0 2025-05-15 12:11:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-mjw8m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie29ba156624 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Namespace="kube-system" Pod="coredns-6f6b679f8f-mjw8m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mjw8m-" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.445 [INFO][4429] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Namespace="kube-system" Pod="coredns-6f6b679f8f-mjw8m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.474 [INFO][4443] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" HandleID="k8s-pod-network.37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Workload="localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.485 [INFO][4443] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" HandleID="k8s-pod-network.37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Workload="localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2360), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-mjw8m", "timestamp":"2025-05-15 12:12:23.474597191 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.485 [INFO][4443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.485 [INFO][4443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.485 [INFO][4443] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.488 [INFO][4443] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" host="localhost" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.492 [INFO][4443] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.496 [INFO][4443] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.499 [INFO][4443] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.501 [INFO][4443] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.501 [INFO][4443] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" host="localhost" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.503 [INFO][4443] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4 May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.507 [INFO][4443] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" host="localhost" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 
12:12:23.514 [INFO][4443] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" host="localhost" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.514 [INFO][4443] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" host="localhost" May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.514 [INFO][4443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:12:23.539469 containerd[1522]: 2025-05-15 12:12:23.515 [INFO][4443] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" HandleID="k8s-pod-network.37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Workload="localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0" May 15 12:12:23.541359 containerd[1522]: 2025-05-15 12:12:23.518 [INFO][4429] cni-plugin/k8s.go 386: Populated endpoint ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Namespace="kube-system" Pod="coredns-6f6b679f8f-mjw8m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ec08a749-bfa9-479d-ad96-3a7485e9dad7", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-mjw8m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie29ba156624", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:23.541359 containerd[1522]: 2025-05-15 12:12:23.518 [INFO][4429] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Namespace="kube-system" Pod="coredns-6f6b679f8f-mjw8m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0" May 15 12:12:23.541359 containerd[1522]: 2025-05-15 12:12:23.518 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
calie29ba156624 ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Namespace="kube-system" Pod="coredns-6f6b679f8f-mjw8m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0" May 15 12:12:23.541359 containerd[1522]: 2025-05-15 12:12:23.521 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Namespace="kube-system" Pod="coredns-6f6b679f8f-mjw8m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0" May 15 12:12:23.541359 containerd[1522]: 2025-05-15 12:12:23.521 [INFO][4429] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Namespace="kube-system" Pod="coredns-6f6b679f8f-mjw8m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ec08a749-bfa9-479d-ad96-3a7485e9dad7", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4", Pod:"coredns-6f6b679f8f-mjw8m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie29ba156624", MAC:"16:f6:9d:fb:08:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:12:23.541359 containerd[1522]: 2025-05-15 12:12:23.535 [INFO][4429] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" Namespace="kube-system" Pod="coredns-6f6b679f8f-mjw8m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--mjw8m-eth0" May 15 12:12:23.586356 containerd[1522]: time="2025-05-15T12:12:23.586217508Z" level=info msg="connecting to shim 37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4" address="unix:///run/containerd/s/5ca8d9cd50597897744df347d2c1c834fba360c1319b4ae350edcd23942b9cb1" namespace=k8s.io protocol=ttrpc version=3 May 15 12:12:23.621613 systemd[1]: Started cri-containerd-37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4.scope - libcontainer container 
37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4. May 15 12:12:23.633103 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 12:12:23.653335 containerd[1522]: time="2025-05-15T12:12:23.653272052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mjw8m,Uid:ec08a749-bfa9-479d-ad96-3a7485e9dad7,Namespace:kube-system,Attempt:0,} returns sandbox id \"37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4\"" May 15 12:12:23.657201 containerd[1522]: time="2025-05-15T12:12:23.657156969Z" level=info msg="CreateContainer within sandbox \"37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 12:12:23.676986 containerd[1522]: time="2025-05-15T12:12:23.676357132Z" level=info msg="Container 1ff309e19cd68c3507b22780cf5894202bf8b0b1f7c543493e036a6328e87dc6: CDI devices from CRI Config.CDIDevices: []" May 15 12:12:23.677951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947262883.mount: Deactivated successfully. May 15 12:12:23.683305 containerd[1522]: time="2025-05-15T12:12:23.683272397Z" level=info msg="CreateContainer within sandbox \"37e7fac0b1507c3b57ee3571bdc7f4e8fc39e40050ebd35474d4225c6c295ff4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ff309e19cd68c3507b22780cf5894202bf8b0b1f7c543493e036a6328e87dc6\"" May 15 12:12:23.684085 containerd[1522]: time="2025-05-15T12:12:23.684057212Z" level=info msg="StartContainer for \"1ff309e19cd68c3507b22780cf5894202bf8b0b1f7c543493e036a6328e87dc6\"" May 15 12:12:23.685192 containerd[1522]: time="2025-05-15T12:12:23.685157200Z" level=info msg="connecting to shim 1ff309e19cd68c3507b22780cf5894202bf8b0b1f7c543493e036a6328e87dc6" address="unix:///run/containerd/s/5ca8d9cd50597897744df347d2c1c834fba360c1319b4ae350edcd23942b9cb1" protocol=ttrpc version=3 May 15 12:12:23.707366 systemd[1]: Started cri-containerd-1ff309e19cd68c3507b22780cf5894202bf8b0b1f7c543493e036a6328e87dc6.scope - libcontainer container 1ff309e19cd68c3507b22780cf5894202bf8b0b1f7c543493e036a6328e87dc6. May 15 12:12:23.736256 containerd[1522]: time="2025-05-15T12:12:23.735991693Z" level=info msg="StartContainer for \"1ff309e19cd68c3507b22780cf5894202bf8b0b1f7c543493e036a6328e87dc6\" returns successfully" May 15 12:12:24.560080 kubelet[2625]: I0515 12:12:24.559024 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mjw8m" podStartSLOduration=44.559009098 podStartE2EDuration="44.559009098s" podCreationTimestamp="2025-05-15 12:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:12:24.558601531 +0000 UTC m=+52.275147411" watchObservedRunningTime="2025-05-15 12:12:24.559009098 +0000 UTC m=+52.275554938" May 15 12:12:25.184709 systemd-networkd[1440]: calie29ba156624: Gained IPv6LL May 15 12:12:25.413142 containerd[1522]: time="2025-05-15T12:12:25.413093407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d\" id:\"9f6c077116161201102ab063f440906486e55cb3dd4888cff796e53082070ea0\" pid:4568 exit_status:1 exited_at:{seconds:1747311145 nanos:412245473}" May 15 12:12:25.574730 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:54250.service - OpenSSH per-connection server daemon (10.0.0.1:54250). 
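The kubelet pod_startup_latency_tracker entries report a podStartE2EDuration that matches the gap between podCreationTimestamp and watchObservedRunningTime; for coredns-6f6b679f8f-mjw8m that is 12:11:40 to 12:12:24.559009098, i.e. the reported 44.559009098s (the zeroed firstStartedPulling/lastFinishedPulling values indicate no image pull figured into it). A quick check of that arithmetic, with the timestamps copied from the log:

```go
// Recompute the podStartE2EDuration reported by the kubelet line above
// from the two timestamps it prints (values copied from the log).
package main

import (
	"fmt"
	"time"
)

func main() {
	// Fractional seconds are accepted when parsing even though the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2025-05-15 12:11:40 +0000 UTC")
	running, _ := time.Parse(layout, "2025-05-15 12:12:24.559009098 +0000 UTC")
	fmt.Println(running.Sub(created)) // 44.559009098s
}
```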
May 15 12:12:25.655311 sshd[4581]: Accepted publickey for core from 10.0.0.1 port 54250 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:12:25.658384 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:12:25.664318 systemd-logind[1503]: New session 14 of user core. May 15 12:12:25.675449 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 12:12:25.866184 sshd[4587]: Connection closed by 10.0.0.1 port 54250 May 15 12:12:25.866583 sshd-session[4581]: pam_unix(sshd:session): session closed for user core May 15 12:12:25.870783 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:54250.service: Deactivated successfully. May 15 12:12:25.874393 systemd[1]: session-14.scope: Deactivated successfully. May 15 12:12:25.876108 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit. May 15 12:12:25.878304 systemd-logind[1503]: Removed session 14. May 15 12:12:26.124067 containerd[1522]: time="2025-05-15T12:12:26.123953587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:26.124453 containerd[1522]: time="2025-05-15T12:12:26.124375235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 15 12:12:26.125326 containerd[1522]: time="2025-05-15T12:12:26.125285406Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:26.127290 containerd[1522]: time="2025-05-15T12:12:26.127253377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:26.128174 containerd[1522]: time="2025-05-15T12:12:26.128133631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 6.171832542s" May 15 12:12:26.128174 containerd[1522]: time="2025-05-15T12:12:26.128169788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 12:12:26.132676 containerd[1522]: time="2025-05-15T12:12:26.132520899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 12:12:26.134537 containerd[1522]: time="2025-05-15T12:12:26.134512868Z" level=info msg="CreateContainer within sandbox \"42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 12:12:26.141585 containerd[1522]: time="2025-05-15T12:12:26.141540777Z" level=info msg="Container 54fabe52a559c648d29e25658c88d745af7d4da0b9119b2cfd4864c5bc0329f5: CDI devices from CRI Config.CDIDevices: []" May 15 12:12:26.147346 containerd[1522]: time="2025-05-15T12:12:26.147293942Z" level=info msg="CreateContainer within sandbox \"42cf4621a34f0ee58e55f8b9911bc42523e1fe38953d832908671e99e8fce58b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"54fabe52a559c648d29e25658c88d745af7d4da0b9119b2cfd4864c5bc0329f5\"" May 15 12:12:26.147784 containerd[1522]: time="2025-05-15T12:12:26.147760227Z" level=info msg="StartContainer for \"54fabe52a559c648d29e25658c88d745af7d4da0b9119b2cfd4864c5bc0329f5\"" May 15 12:12:26.149454 containerd[1522]: time="2025-05-15T12:12:26.149221276Z" level=info msg="connecting to shim 54fabe52a559c648d29e25658c88d745af7d4da0b9119b2cfd4864c5bc0329f5" address="unix:///run/containerd/s/af6c48abe2cb0fb4785e4f96651b99107fd0e1e113f62ae446b88e87300100fc" protocol=ttrpc version=3 May 15 12:12:26.171360 systemd[1]: Started cri-containerd-54fabe52a559c648d29e25658c88d745af7d4da0b9119b2cfd4864c5bc0329f5.scope - libcontainer container 54fabe52a559c648d29e25658c88d745af7d4da0b9119b2cfd4864c5bc0329f5. May 15 12:12:26.210989 containerd[1522]: time="2025-05-15T12:12:26.210893453Z" level=info msg="StartContainer for \"54fabe52a559c648d29e25658c88d745af7d4da0b9119b2cfd4864c5bc0329f5\" returns successfully" May 15 12:12:26.562293 kubelet[2625]: I0515 12:12:26.561981 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c7b9b4746-mr7dx" podStartSLOduration=34.385492646 podStartE2EDuration="40.56196403s" podCreationTimestamp="2025-05-15 12:11:46 +0000 UTC" firstStartedPulling="2025-05-15 12:12:19.955879888 +0000 UTC m=+47.672425728" lastFinishedPulling="2025-05-15 12:12:26.132351232 +0000 UTC m=+53.848897112" observedRunningTime="2025-05-15 12:12:26.561531023 +0000 UTC m=+54.278076903" watchObservedRunningTime="2025-05-15 12:12:26.56196403 +0000 UTC m=+54.278509910" May 15 12:12:26.866282 containerd[1522]: time="2025-05-15T12:12:26.866236545Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:26.867477 containerd[1522]: time="2025-05-15T12:12:26.866957930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 12:12:26.868836 containerd[1522]: time="2025-05-15T12:12:26.868792872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 736.239415ms" May 15 12:12:26.868836 containerd[1522]: time="2025-05-15T12:12:26.868832269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 12:12:26.870494 containerd[1522]: time="2025-05-15T12:12:26.870467345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 12:12:26.871446 containerd[1522]: time="2025-05-15T12:12:26.871389715Z" level=info msg="CreateContainer within sandbox \"276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 12:12:26.877217 containerd[1522]: time="2025-05-15T12:12:26.877101444Z" level=info msg="Container 0277b61cc467e1e9346f31ab524588d32bfab5584b4b765b60b2ec61c4753657: CDI devices from CRI Config.CDIDevices: []" May 15 12:12:26.884443 containerd[1522]: time="2025-05-15T12:12:26.884401772Z" level=info msg="CreateContainer within sandbox \"276e597a60b172e0997c567a9fd4a4428f9d72f7163a9c09010e4ba46a33fec8\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0277b61cc467e1e9346f31ab524588d32bfab5584b4b765b60b2ec61c4753657\"" May 15 12:12:26.885877 containerd[1522]: time="2025-05-15T12:12:26.885844423Z" level=info msg="StartContainer for \"0277b61cc467e1e9346f31ab524588d32bfab5584b4b765b60b2ec61c4753657\"" May 15 12:12:26.887461 containerd[1522]: time="2025-05-15T12:12:26.887416144Z" level=info msg="connecting to shim 0277b61cc467e1e9346f31ab524588d32bfab5584b4b765b60b2ec61c4753657" address="unix:///run/containerd/s/32f833b4666d65885aea2608115edc6000dead23b53083218b452e06ee51db39" protocol=ttrpc version=3 May 15 12:12:26.904346 systemd[1]: Started cri-containerd-0277b61cc467e1e9346f31ab524588d32bfab5584b4b765b60b2ec61c4753657.scope - libcontainer container 0277b61cc467e1e9346f31ab524588d32bfab5584b4b765b60b2ec61c4753657. May 15 12:12:26.962952 containerd[1522]: time="2025-05-15T12:12:26.962811803Z" level=info msg="StartContainer for \"0277b61cc467e1e9346f31ab524588d32bfab5584b4b765b60b2ec61c4753657\" returns successfully" May 15 12:12:27.743983 kubelet[2625]: I0515 12:12:27.743884 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c7b9b4746-x7jtk" podStartSLOduration=34.830102786 podStartE2EDuration="41.743869465s" podCreationTimestamp="2025-05-15 12:11:46 +0000 UTC" firstStartedPulling="2025-05-15 12:12:19.955885128 +0000 UTC m=+47.672431008" lastFinishedPulling="2025-05-15 12:12:26.869651847 +0000 UTC m=+54.586197687" observedRunningTime="2025-05-15 12:12:27.567084573 +0000 UTC m=+55.283630453" watchObservedRunningTime="2025-05-15 12:12:27.743869465 +0000 UTC m=+55.460415345" May 15 12:12:28.560764 kubelet[2625]: I0515 12:12:28.560730 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:12:29.858616 containerd[1522]: time="2025-05-15T12:12:29.858548770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:29.859106 containerd[1522]: time="2025-05-15T12:12:29.859073014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 15 12:12:29.860356 containerd[1522]: time="2025-05-15T12:12:29.860295810Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:29.862135 containerd[1522]: time="2025-05-15T12:12:29.862075448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:29.862953 containerd[1522]: time="2025-05-15T12:12:29.862780119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 2.992284816s" May 15 12:12:29.862953 containerd[1522]: time="2025-05-15T12:12:29.862830756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 15 12:12:29.864009 containerd[1522]: time="2025-05-15T12:12:29.863954679Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 12:12:29.866371 containerd[1522]: time="2025-05-15T12:12:29.866299837Z" level=info msg="CreateContainer within sandbox \"265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 12:12:29.879230 containerd[1522]: time="2025-05-15T12:12:29.878393926Z" level=info msg="Container 6f61329093f10a7a53bc8bb706dbbe68c864cd244dc183f0711d0f93a6f513e7: CDI devices from CRI Config.CDIDevices: []" May 15 12:12:29.887716 containerd[1522]: time="2025-05-15T12:12:29.887659209Z" level=info msg="CreateContainer within sandbox \"265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6f61329093f10a7a53bc8bb706dbbe68c864cd244dc183f0711d0f93a6f513e7\"" May 15 12:12:29.888555 containerd[1522]: time="2025-05-15T12:12:29.888519950Z" level=info msg="StartContainer for \"6f61329093f10a7a53bc8bb706dbbe68c864cd244dc183f0711d0f93a6f513e7\"" May 15 12:12:29.890463 containerd[1522]: time="2025-05-15T12:12:29.890410620Z" level=info msg="connecting to shim 6f61329093f10a7a53bc8bb706dbbe68c864cd244dc183f0711d0f93a6f513e7" address="unix:///run/containerd/s/29d897290df11f4b449469e831be66e62e02b6c2d892419c22a4cbfc65bb4b21" protocol=ttrpc version=3 May 15 12:12:29.920438 systemd[1]: Started cri-containerd-6f61329093f10a7a53bc8bb706dbbe68c864cd244dc183f0711d0f93a6f513e7.scope - libcontainer container 6f61329093f10a7a53bc8bb706dbbe68c864cd244dc183f0711d0f93a6f513e7. May 15 12:12:29.965589 containerd[1522]: time="2025-05-15T12:12:29.965544536Z" level=info msg="StartContainer for \"6f61329093f10a7a53bc8bb706dbbe68c864cd244dc183f0711d0f93a6f513e7\" returns successfully" May 15 12:12:30.882238 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:54260.service - OpenSSH per-connection server daemon (10.0.0.1:54260). May 15 12:12:30.940162 sshd[4722]: Accepted publickey for core from 10.0.0.1 port 54260 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:12:30.941827 sshd-session[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:12:30.945579 systemd-logind[1503]: New session 15 of user core. May 15 12:12:30.954399 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 12:12:31.095380 sshd[4724]: Connection closed by 10.0.0.1 port 54260 May 15 12:12:31.095726 sshd-session[4722]: pam_unix(sshd:session): session closed for user core May 15 12:12:31.099459 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:54260.service: Deactivated successfully. May 15 12:12:31.102174 systemd[1]: session-15.scope: Deactivated successfully. May 15 12:12:31.105587 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit. May 15 12:12:31.107303 systemd-logind[1503]: Removed session 15. 
May 15 12:12:32.879843 containerd[1522]: time="2025-05-15T12:12:32.879713967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:32.880354 containerd[1522]: time="2025-05-15T12:12:32.880305810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 15 12:12:32.881401 containerd[1522]: time="2025-05-15T12:12:32.881359424Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:32.883796 containerd[1522]: time="2025-05-15T12:12:32.883750075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:32.884447 containerd[1522]: time="2025-05-15T12:12:32.884405554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 3.020423158s" May 15 12:12:32.884495 containerd[1522]: time="2025-05-15T12:12:32.884446671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 15 12:12:32.885813 containerd[1522]: time="2025-05-15T12:12:32.885337256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 12:12:32.894029 containerd[1522]: time="2025-05-15T12:12:32.893977356Z" level=info msg="CreateContainer within sandbox \"91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 12:12:32.906231 containerd[1522]: time="2025-05-15T12:12:32.906026003Z" level=info msg="Container c5216348aefbb67c536801b3ec00b14916fb7c5dc53451bf2de3540dd2aaedaa: CDI devices from CRI Config.CDIDevices: []" May 15 12:12:32.916656 containerd[1522]: time="2025-05-15T12:12:32.916599982Z" level=info msg="CreateContainer within sandbox \"91398e5ed10097a1023936ffe811986afa63a8b055f3a05bd1fcf0a631489f44\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c5216348aefbb67c536801b3ec00b14916fb7c5dc53451bf2de3540dd2aaedaa\"" May 15 12:12:32.917148 containerd[1522]: time="2025-05-15T12:12:32.917122869Z" level=info msg="StartContainer for \"c5216348aefbb67c536801b3ec00b14916fb7c5dc53451bf2de3540dd2aaedaa\"" May 15 12:12:32.920002 containerd[1522]: time="2025-05-15T12:12:32.919964252Z" level=info msg="connecting to shim c5216348aefbb67c536801b3ec00b14916fb7c5dc53451bf2de3540dd2aaedaa" address="unix:///run/containerd/s/0582565ef9ef69cb2a54a60ca6dcf59b80a52fda627daea9bf7fa08370a6a01c" protocol=ttrpc version=3 May 15 12:12:32.943437 systemd[1]: Started cri-containerd-c5216348aefbb67c536801b3ec00b14916fb7c5dc53451bf2de3540dd2aaedaa.scope - libcontainer container c5216348aefbb67c536801b3ec00b14916fb7c5dc53451bf2de3540dd2aaedaa. 
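Back-of-envelope only: the calico/kube-controllers pull above reports bytes read=32554116 over 3.020423158s, which works out to roughly 10 MiB/s, taking both numbers exactly as reported:

```go
// Back-of-envelope transfer rate for the calico/kube-controllers pull above,
// using the byte count and duration exactly as reported in the log.
package main

import "fmt"

func main() {
	const bytesRead = 32554116  // "bytes read=32554116"
	const seconds = 3.020423158 // "in 3.020423158s"
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // 10.3 MiB/s
}
```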
May 15 12:12:32.985139 containerd[1522]: time="2025-05-15T12:12:32.985084102Z" level=info msg="StartContainer for \"c5216348aefbb67c536801b3ec00b14916fb7c5dc53451bf2de3540dd2aaedaa\" returns successfully" May 15 12:12:33.595390 kubelet[2625]: I0515 12:12:33.595322 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6588b76679-g7dkc" podStartSLOduration=35.542466788 podStartE2EDuration="46.595300809s" podCreationTimestamp="2025-05-15 12:11:47 +0000 UTC" firstStartedPulling="2025-05-15 12:12:21.832376443 +0000 UTC m=+49.548922323" lastFinishedPulling="2025-05-15 12:12:32.885210464 +0000 UTC m=+60.601756344" observedRunningTime="2025-05-15 12:12:33.593709825 +0000 UTC m=+61.310255705" watchObservedRunningTime="2025-05-15 12:12:33.595300809 +0000 UTC m=+61.311846769" May 15 12:12:33.668441 containerd[1522]: time="2025-05-15T12:12:33.668398424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c5216348aefbb67c536801b3ec00b14916fb7c5dc53451bf2de3540dd2aaedaa\" id:\"c6b806c32255eaf98cda0160673909730331ddb44bb5bfa4cbb2a4cb57e5c4dd\" pid:4798 exited_at:{seconds:1747311153 nanos:666806640}" May 15 12:12:34.527617 containerd[1522]: time="2025-05-15T12:12:34.527564728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:34.528518 containerd[1522]: time="2025-05-15T12:12:34.527979024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 15 12:12:34.529133 containerd[1522]: time="2025-05-15T12:12:34.529095079Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:34.531617 containerd[1522]: time="2025-05-15T12:12:34.531583813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:12:34.532255 containerd[1522]: time="2025-05-15T12:12:34.532143340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.646769966s" May 15 12:12:34.532255 containerd[1522]: time="2025-05-15T12:12:34.532218135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 15 12:12:34.535608 containerd[1522]: time="2025-05-15T12:12:34.535566339Z" level=info msg="CreateContainer within sandbox \"265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 12:12:34.544672 containerd[1522]: time="2025-05-15T12:12:34.544617048Z" level=info msg="Container 63550c85c32ae7cace0a277e0f1a9f3d765b3f738b9aaaa29fe7f2d3cf7ebfea: CDI devices from CRI Config.CDIDevices: []" May 15 12:12:34.554927 containerd[1522]: time="2025-05-15T12:12:34.554870887Z" level=info msg="CreateContainer within sandbox 
\"265017bfc753eac354082fd5ecd5284a832e8c49b1270aacc27a67e6ab8c8d44\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"63550c85c32ae7cace0a277e0f1a9f3d765b3f738b9aaaa29fe7f2d3cf7ebfea\"" May 15 12:12:34.555644 containerd[1522]: time="2025-05-15T12:12:34.555611363Z" level=info msg="StartContainer for \"63550c85c32ae7cace0a277e0f1a9f3d765b3f738b9aaaa29fe7f2d3cf7ebfea\"" May 15 12:12:34.557044 containerd[1522]: time="2025-05-15T12:12:34.557015121Z" level=info msg="connecting to shim 63550c85c32ae7cace0a277e0f1a9f3d765b3f738b9aaaa29fe7f2d3cf7ebfea" address="unix:///run/containerd/s/29d897290df11f4b449469e831be66e62e02b6c2d892419c22a4cbfc65bb4b21" protocol=ttrpc version=3 May 15 12:12:34.583427 systemd[1]: Started cri-containerd-63550c85c32ae7cace0a277e0f1a9f3d765b3f738b9aaaa29fe7f2d3cf7ebfea.scope - libcontainer container 63550c85c32ae7cace0a277e0f1a9f3d765b3f738b9aaaa29fe7f2d3cf7ebfea. May 15 12:12:34.639023 containerd[1522]: time="2025-05-15T12:12:34.638239318Z" level=info msg="StartContainer for \"63550c85c32ae7cace0a277e0f1a9f3d765b3f738b9aaaa29fe7f2d3cf7ebfea\" returns successfully" May 15 12:12:35.449519 kubelet[2625]: I0515 12:12:35.449465 2625 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 12:12:35.456525 kubelet[2625]: I0515 12:12:35.456479 2625 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 12:12:35.602464 kubelet[2625]: I0515 12:12:35.602375 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jl6hb" podStartSLOduration=35.676000231 podStartE2EDuration="48.602355078s" podCreationTimestamp="2025-05-15 12:11:47 +0000 UTC" firstStartedPulling="2025-05-15 12:12:21.606757236 +0000 UTC m=+49.323303116" lastFinishedPulling="2025-05-15 12:12:34.533112083 +0000 UTC m=+62.249657963" observedRunningTime="2025-05-15 12:12:35.60145241 +0000 UTC m=+63.317998290" watchObservedRunningTime="2025-05-15 12:12:35.602355078 +0000 UTC m=+63.318900918" May 15 12:12:36.109213 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:48990.service - OpenSSH per-connection server daemon (10.0.0.1:48990). May 15 12:12:36.171075 sshd[4846]: Accepted publickey for core from 10.0.0.1 port 48990 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM May 15 12:12:36.172724 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:12:36.177834 systemd-logind[1503]: New session 16 of user core. May 15 12:12:36.188416 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 12:12:36.370244 sshd[4848]: Connection closed by 10.0.0.1 port 48990 May 15 12:12:36.370827 sshd-session[4846]: pam_unix(sshd:session): session closed for user core May 15 12:12:36.374000 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:48990.service: Deactivated successfully. May 15 12:12:36.375993 systemd[1]: session-16.scope: Deactivated successfully. May 15 12:12:36.377683 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit. May 15 12:12:36.381782 systemd-logind[1503]: Removed session 16. 
May 15 12:12:38.492731 containerd[1522]: time="2025-05-15T12:12:38.492675478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c5216348aefbb67c536801b3ec00b14916fb7c5dc53451bf2de3540dd2aaedaa\" id:\"90d97bc0c01d8821ec3fa176fefb4c73b3e52b0916301d6c8a79bc9ac691142e\" pid:4878 exited_at:{seconds:1747311158 nanos:492308577}"
May 15 12:12:41.382541 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:48996.service - OpenSSH per-connection server daemon (10.0.0.1:48996).
May 15 12:12:41.439269 sshd[4893]: Accepted publickey for core from 10.0.0.1 port 48996 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 12:12:41.440593 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:12:41.444481 systemd-logind[1503]: New session 17 of user core.
May 15 12:12:41.455395 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 12:12:41.602267 sshd[4895]: Connection closed by 10.0.0.1 port 48996
May 15 12:12:41.603967 sshd-session[4893]: pam_unix(sshd:session): session closed for user core
May 15 12:12:41.614788 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:48996.service: Deactivated successfully.
May 15 12:12:41.616691 systemd[1]: session-17.scope: Deactivated successfully.
May 15 12:12:41.618837 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit.
May 15 12:12:41.622002 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:48998.service - OpenSSH per-connection server daemon (10.0.0.1:48998).
May 15 12:12:41.623995 systemd-logind[1503]: Removed session 17.
May 15 12:12:41.675148 sshd[4909]: Accepted publickey for core from 10.0.0.1 port 48998 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 12:12:41.676612 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:12:41.681285 systemd-logind[1503]: New session 18 of user core.
May 15 12:12:41.691381 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 12:12:41.918913 sshd[4911]: Connection closed by 10.0.0.1 port 48998
May 15 12:12:41.919519 sshd-session[4909]: pam_unix(sshd:session): session closed for user core
May 15 12:12:41.932882 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:48998.service: Deactivated successfully.
May 15 12:12:41.935399 systemd[1]: session-18.scope: Deactivated successfully.
May 15 12:12:41.936414 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit.
May 15 12:12:41.939908 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:49014.service - OpenSSH per-connection server daemon (10.0.0.1:49014).
May 15 12:12:41.941204 systemd-logind[1503]: Removed session 18.
May 15 12:12:41.992843 sshd[4923]: Accepted publickey for core from 10.0.0.1 port 49014 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 12:12:41.994285 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:12:42.000368 systemd-logind[1503]: New session 19 of user core.
May 15 12:12:42.006424 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 12:12:43.613128 sshd[4925]: Connection closed by 10.0.0.1 port 49014
May 15 12:12:43.613625 sshd-session[4923]: pam_unix(sshd:session): session closed for user core
May 15 12:12:43.621683 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:49014.service: Deactivated successfully.
May 15 12:12:43.624898 systemd[1]: session-19.scope: Deactivated successfully.
May 15 12:12:43.625128 systemd[1]: session-19.scope: Consumed 544ms CPU time, 68.4M memory peak.
May 15 12:12:43.627566 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit.
May 15 12:12:43.630149 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:39134.service - OpenSSH per-connection server daemon (10.0.0.1:39134).
May 15 12:12:43.633806 systemd-logind[1503]: Removed session 19.
May 15 12:12:43.704677 sshd[4948]: Accepted publickey for core from 10.0.0.1 port 39134 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 12:12:43.706290 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:12:43.711947 systemd-logind[1503]: New session 20 of user core.
May 15 12:12:43.723388 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 12:12:44.042855 sshd[4950]: Connection closed by 10.0.0.1 port 39134
May 15 12:12:44.043581 sshd-session[4948]: pam_unix(sshd:session): session closed for user core
May 15 12:12:44.055818 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:39134.service: Deactivated successfully.
May 15 12:12:44.060010 systemd[1]: session-20.scope: Deactivated successfully.
May 15 12:12:44.061934 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit.
May 15 12:12:44.065527 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:39140.service - OpenSSH per-connection server daemon (10.0.0.1:39140).
May 15 12:12:44.067443 systemd-logind[1503]: Removed session 20.
May 15 12:12:44.121427 sshd[4962]: Accepted publickey for core from 10.0.0.1 port 39140 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 12:12:44.122912 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:12:44.128394 systemd-logind[1503]: New session 21 of user core.
May 15 12:12:44.140415 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 12:12:44.278229 sshd[4964]: Connection closed by 10.0.0.1 port 39140
May 15 12:12:44.278854 sshd-session[4962]: pam_unix(sshd:session): session closed for user core
May 15 12:12:44.283337 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:39140.service: Deactivated successfully.
May 15 12:12:44.285473 systemd[1]: session-21.scope: Deactivated successfully.
May 15 12:12:44.286419 systemd-logind[1503]: Session 21 logged out. Waiting for processes to exit.
May 15 12:12:44.288129 systemd-logind[1503]: Removed session 21.
May 15 12:12:49.290502 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:39146.service - OpenSSH per-connection server daemon (10.0.0.1:39146).
May 15 12:12:49.357778 sshd[4980]: Accepted publickey for core from 10.0.0.1 port 39146 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 12:12:49.359049 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:12:49.363260 systemd-logind[1503]: New session 22 of user core.
May 15 12:12:49.377451 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 12:12:49.495737 sshd[4982]: Connection closed by 10.0.0.1 port 39146
May 15 12:12:49.496337 sshd-session[4980]: pam_unix(sshd:session): session closed for user core
May 15 12:12:49.500492 systemd-logind[1503]: Session 22 logged out. Waiting for processes to exit.
May 15 12:12:49.500856 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:39146.service: Deactivated successfully.
May 15 12:12:49.503840 systemd[1]: session-22.scope: Deactivated successfully.
May 15 12:12:49.506749 systemd-logind[1503]: Removed session 22.
May 15 12:12:54.507633 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:46386.service - OpenSSH per-connection server daemon (10.0.0.1:46386).
May 15 12:12:54.549928 sshd[4999]: Accepted publickey for core from 10.0.0.1 port 46386 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 12:12:54.551333 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:12:54.555036 systemd-logind[1503]: New session 23 of user core.
May 15 12:12:54.566364 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 12:12:54.692217 sshd[5001]: Connection closed by 10.0.0.1 port 46386
May 15 12:12:54.692427 sshd-session[4999]: pam_unix(sshd:session): session closed for user core
May 15 12:12:54.696099 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:46386.service: Deactivated successfully.
May 15 12:12:54.697929 systemd[1]: session-23.scope: Deactivated successfully.
May 15 12:12:54.698859 systemd-logind[1503]: Session 23 logged out. Waiting for processes to exit.
May 15 12:12:54.700212 systemd-logind[1503]: Removed session 23.
May 15 12:12:55.402403 containerd[1522]: time="2025-05-15T12:12:55.402365749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61bdd506cce9846db90e40468d30f1d9da354dade607ac5f200c09f797a8909d\" id:\"552716639ff6fdfa39f9215b5b71a029831443e914908983183ed1c0d842860b\" pid:5024 exited_at:{seconds:1747311175 nanos:402015839}"
May 15 12:12:57.466196 containerd[1522]: time="2025-05-15T12:12:57.466120735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c5216348aefbb67c536801b3ec00b14916fb7c5dc53451bf2de3540dd2aaedaa\" id:\"b06f8b4f153757265b7ba43c82d8fdafd4bb12b15fcd593f5fa1223c597e548e\" pid:5055 exited_at:{seconds:1747311177 nanos:465826744}"
May 15 12:12:59.703764 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:46394.service - OpenSSH per-connection server daemon (10.0.0.1:46394).
May 15 12:12:59.767913 sshd[5066]: Accepted publickey for core from 10.0.0.1 port 46394 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 12:12:59.769623 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:12:59.774752 systemd-logind[1503]: New session 24 of user core.
May 15 12:12:59.792420 systemd[1]: Started session-24.scope - Session 24 of User core.
May 15 12:12:59.996488 sshd[5068]: Connection closed by 10.0.0.1 port 46394
May 15 12:12:59.996833 sshd-session[5066]: pam_unix(sshd:session): session closed for user core
May 15 12:12:59.999776 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:46394.service: Deactivated successfully.
May 15 12:13:00.001813 systemd[1]: session-24.scope: Deactivated successfully.
May 15 12:13:00.003479 systemd-logind[1503]: Session 24 logged out. Waiting for processes to exit.
May 15 12:13:00.005636 systemd-logind[1503]: Removed session 24.
May 15 12:13:02.429777 kubelet[2625]: I0515 12:13:02.429734 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 12:13:05.011764 systemd[1]: Started sshd@24-10.0.0.118:22-10.0.0.1:60016.service - OpenSSH per-connection server daemon (10.0.0.1:60016).
May 15 12:13:05.072304 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 60016 ssh2: RSA SHA256:Z/MOy8UKtI921msWtjhY7nphpTNSYs7FwiJLLfsk6vM
May 15 12:13:05.073748 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:13:05.077559 systemd-logind[1503]: New session 25 of user core.
May 15 12:13:05.089372 systemd[1]: Started session-25.scope - Session 25 of User core.
May 15 12:13:05.340573 sshd[5087]: Connection closed by 10.0.0.1 port 60016
May 15 12:13:05.340910 sshd-session[5085]: pam_unix(sshd:session): session closed for user core
May 15 12:13:05.345779 systemd[1]: sshd@24-10.0.0.118:22-10.0.0.1:60016.service: Deactivated successfully.
May 15 12:13:05.349976 systemd[1]: session-25.scope: Deactivated successfully.
May 15 12:13:05.351146 systemd-logind[1503]: Session 25 logged out. Waiting for processes to exit.
May 15 12:13:05.353690 systemd-logind[1503]: Removed session 25.