May 13 23:46:48.979510 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 23:46:48.979533 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025
May 13 23:46:48.979543 kernel: KASLR enabled
May 13 23:46:48.979549 kernel: efi: EFI v2.7 by EDK II
May 13 23:46:48.979554 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 13 23:46:48.979559 kernel: random: crng init done
May 13 23:46:48.979566 kernel: secureboot: Secure boot disabled
May 13 23:46:48.979572 kernel: ACPI: Early table checksum verification disabled
May 13 23:46:48.979577 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 13 23:46:48.979584 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 23:46:48.979590 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:46:48.979596 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:46:48.979601 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:46:48.979607 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:46:48.979614 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:46:48.979622 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:46:48.979628 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:46:48.979634 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:46:48.979640 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:46:48.979657 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 23:46:48.979663 kernel: NUMA: Failed to initialise from firmware
May 13 23:46:48.979674 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:46:48.979680 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
May 13 23:46:48.979686 kernel: Zone ranges:
May 13 23:46:48.979691 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:46:48.979699 kernel: DMA32 empty
May 13 23:46:48.979705 kernel: Normal empty
May 13 23:46:48.979710 kernel: Movable zone start for each node
May 13 23:46:48.979716 kernel: Early memory node ranges
May 13 23:46:48.979722 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 13 23:46:48.979728 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 13 23:46:48.979734 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 13 23:46:48.979740 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 13 23:46:48.979746 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 13 23:46:48.979752 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 23:46:48.979758 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 23:46:48.979763 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 23:46:48.979771 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 23:46:48.979777 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:46:48.979783 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 23:46:48.979792 kernel: psci: probing for conduit method from ACPI.
May 13 23:46:48.979798 kernel: psci: PSCIv1.1 detected in firmware.
May 13 23:46:48.979804 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 23:46:48.979812 kernel: psci: Trusted OS migration not required
May 13 23:46:48.979819 kernel: psci: SMC Calling Convention v1.1
May 13 23:46:48.979825 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 23:46:48.979831 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 23:46:48.979838 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 23:46:48.979844 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 23:46:48.979851 kernel: Detected PIPT I-cache on CPU0
May 13 23:46:48.979857 kernel: CPU features: detected: GIC system register CPU interface
May 13 23:46:48.979863 kernel: CPU features: detected: Hardware dirty bit management
May 13 23:46:48.979869 kernel: CPU features: detected: Spectre-v4
May 13 23:46:48.979877 kernel: CPU features: detected: Spectre-BHB
May 13 23:46:48.979884 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 23:46:48.979890 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 23:46:48.979899 kernel: CPU features: detected: ARM erratum 1418040
May 13 23:46:48.979920 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 23:46:48.979926 kernel: alternatives: applying boot alternatives
May 13 23:46:48.979933 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:46:48.979940 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:46:48.979947 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:46:48.979953 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:46:48.979959 kernel: Fallback order for Node 0: 0
May 13 23:46:48.979968 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 23:46:48.979974 kernel: Policy zone: DMA
May 13 23:46:48.979980 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:46:48.979987 kernel: software IO TLB: area num 4.
May 13 23:46:48.979993 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 13 23:46:48.980000 kernel: Memory: 2387344K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 184944K reserved, 0K cma-reserved)
May 13 23:46:48.980007 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 23:46:48.980013 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:46:48.980021 kernel: rcu: RCU event tracing is enabled.
May 13 23:46:48.980027 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 23:46:48.980034 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:46:48.980041 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:46:48.980049 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:46:48.980056 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 23:46:48.980062 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 23:46:48.980069 kernel: GICv3: 256 SPIs implemented
May 13 23:46:48.980075 kernel: GICv3: 0 Extended SPIs implemented
May 13 23:46:48.980081 kernel: Root IRQ handler: gic_handle_irq
May 13 23:46:48.980088 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 23:46:48.980095 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 23:46:48.980101 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 23:46:48.980108 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:46:48.980115 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 23:46:48.980122 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 23:46:48.980129 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 23:46:48.980136 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:46:48.980142 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:46:48.980149 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 23:46:48.980163 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 23:46:48.980170 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 23:46:48.980176 kernel: arm-pv: using stolen time PV
May 13 23:46:48.980183 kernel: Console: colour dummy device 80x25
May 13 23:46:48.980190 kernel: ACPI: Core revision 20230628
May 13 23:46:48.980197 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 23:46:48.980206 kernel: pid_max: default: 32768 minimum: 301
May 13 23:46:48.980213 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:46:48.980219 kernel: landlock: Up and running.
May 13 23:46:48.980226 kernel: SELinux: Initializing.
May 13 23:46:48.980233 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:46:48.980239 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:46:48.980246 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 23:46:48.980253 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:46:48.980261 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:46:48.980269 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:46:48.980276 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:46:48.980283 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 23:46:48.980289 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 23:46:48.980296 kernel: Remapping and enabling EFI services.
May 13 23:46:48.980306 kernel: smp: Bringing up secondary CPUs ...
May 13 23:46:48.980322 kernel: Detected PIPT I-cache on CPU1
May 13 23:46:48.980329 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 23:46:48.980335 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 23:46:48.980344 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:46:48.980351 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 23:46:48.980363 kernel: Detected PIPT I-cache on CPU2
May 13 23:46:48.980372 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 23:46:48.980380 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 23:46:48.980387 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:46:48.980394 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 23:46:48.980409 kernel: Detected PIPT I-cache on CPU3
May 13 23:46:48.980417 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 23:46:48.980424 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 23:46:48.980434 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:46:48.980441 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 23:46:48.980448 kernel: smp: Brought up 1 node, 4 CPUs
May 13 23:46:48.980455 kernel: SMP: Total of 4 processors activated.
May 13 23:46:48.980462 kernel: CPU features: detected: 32-bit EL0 Support
May 13 23:46:48.980469 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 23:46:48.980476 kernel: CPU features: detected: Common not Private translations
May 13 23:46:48.980484 kernel: CPU features: detected: CRC32 instructions
May 13 23:46:48.980492 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 23:46:48.980499 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 23:46:48.980506 kernel: CPU features: detected: LSE atomic instructions
May 13 23:46:48.980513 kernel: CPU features: detected: Privileged Access Never
May 13 23:46:48.980520 kernel: CPU features: detected: RAS Extension Support
May 13 23:46:48.980527 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 23:46:48.980535 kernel: CPU: All CPU(s) started at EL1
May 13 23:46:48.980542 kernel: alternatives: applying system-wide alternatives
May 13 23:46:48.980550 kernel: devtmpfs: initialized
May 13 23:46:48.980557 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:46:48.980565 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 23:46:48.980572 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:46:48.980579 kernel: SMBIOS 3.0.0 present.
May 13 23:46:48.980586 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 23:46:48.980593 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:46:48.980600 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 23:46:48.980607 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 23:46:48.980615 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 23:46:48.980623 kernel: audit: initializing netlink subsys (disabled)
May 13 23:46:48.980630 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
May 13 23:46:48.980637 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:46:48.980644 kernel: cpuidle: using governor menu
May 13 23:46:48.980651 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 23:46:48.980658 kernel: ASID allocator initialised with 32768 entries
May 13 23:46:48.980665 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:46:48.980673 kernel: Serial: AMBA PL011 UART driver
May 13 23:46:48.980682 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 23:46:48.980689 kernel: Modules: 0 pages in range for non-PLT usage
May 13 23:46:48.980696 kernel: Modules: 509232 pages in range for PLT usage
May 13 23:46:48.980703 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:46:48.980711 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:46:48.980718 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 23:46:48.980726 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 23:46:48.980733 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:46:48.980742 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:46:48.980752 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 23:46:48.980761 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 23:46:48.980768 kernel: ACPI: Added _OSI(Module Device)
May 13 23:46:48.980775 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:46:48.980782 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:46:48.980789 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:46:48.980796 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:46:48.980803 kernel: ACPI: Interpreter enabled
May 13 23:46:48.980810 kernel: ACPI: Using GIC for interrupt routing
May 13 23:46:48.980817 kernel: ACPI: MCFG table detected, 1 entries
May 13 23:46:48.980826 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 23:46:48.980834 kernel: printk: console [ttyAMA0] enabled
May 13 23:46:48.980841 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 23:46:48.981000 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:46:48.981081 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 23:46:48.981191 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 23:46:48.981263 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 23:46:48.981333 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 23:46:48.981342 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 23:46:48.981349 kernel: PCI host bridge to bus 0000:00
May 13 23:46:48.981438 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 23:46:48.981500 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 23:46:48.981560 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 23:46:48.981622 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:46:48.981714 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 23:46:48.981798 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 23:46:48.981868 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 23:46:48.981939 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 23:46:48.982007 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 23:46:48.982078 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 23:46:48.982145 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 23:46:48.982228 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 23:46:48.982291 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 23:46:48.982354 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 23:46:48.982437 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 23:46:48.982448 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 23:46:48.982455 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 23:46:48.982463 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 23:46:48.982473 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 23:46:48.982480 kernel: iommu: Default domain type: Translated
May 13 23:46:48.982487 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 23:46:48.982494 kernel: efivars: Registered efivars operations
May 13 23:46:48.982501 kernel: vgaarb: loaded
May 13 23:46:48.982508 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 23:46:48.982515 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:46:48.982522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:46:48.982529 kernel: pnp: PnP ACPI init
May 13 23:46:48.982607 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 23:46:48.982617 kernel: pnp: PnP ACPI: found 1 devices
May 13 23:46:48.982625 kernel: NET: Registered PF_INET protocol family
May 13 23:46:48.982632 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:46:48.982639 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 23:46:48.982647 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:46:48.982654 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:46:48.982661 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 23:46:48.982671 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 23:46:48.982678 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:46:48.982686 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:46:48.982694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:46:48.982703 kernel: PCI: CLS 0 bytes, default 64
May 13 23:46:48.982711 kernel: kvm [1]: HYP mode not available
May 13 23:46:48.982718 kernel: Initialise system trusted keyrings
May 13 23:46:48.982726 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 23:46:48.982733 kernel: Key type asymmetric registered
May 13 23:46:48.982742 kernel: Asymmetric key parser 'x509' registered
May 13 23:46:48.982749 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 23:46:48.982756 kernel: io scheduler mq-deadline registered
May 13 23:46:48.982763 kernel: io scheduler kyber registered
May 13 23:46:48.982770 kernel: io scheduler bfq registered
May 13 23:46:48.982777 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 23:46:48.982784 kernel: ACPI: button: Power Button [PWRB]
May 13 23:46:48.982792 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 23:46:48.982863 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 23:46:48.982873 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:46:48.982882 kernel: thunder_xcv, ver 1.0
May 13 23:46:48.982889 kernel: thunder_bgx, ver 1.0
May 13 23:46:48.982896 kernel: nicpf, ver 1.0
May 13 23:46:48.982903 kernel: nicvf, ver 1.0
May 13 23:46:48.982983 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 23:46:48.983049 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:46:48 UTC (1747180008)
May 13 23:46:48.983059 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 23:46:48.983066 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 23:46:48.983075 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 23:46:48.983082 kernel: watchdog: Hard watchdog permanently disabled
May 13 23:46:48.983089 kernel: NET: Registered PF_INET6 protocol family
May 13 23:46:48.983097 kernel: Segment Routing with IPv6
May 13 23:46:48.983104 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:46:48.983111 kernel: NET: Registered PF_PACKET protocol family
May 13 23:46:48.983118 kernel: Key type dns_resolver registered
May 13 23:46:48.983125 kernel: registered taskstats version 1
May 13 23:46:48.983132 kernel: Loading compiled-in X.509 certificates
May 13 23:46:48.983141 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd'
May 13 23:46:48.983148 kernel: Key type .fscrypt registered
May 13 23:46:48.983162 kernel: Key type fscrypt-provisioning registered
May 13 23:46:48.983170 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:46:48.983177 kernel: ima: Allocated hash algorithm: sha1
May 13 23:46:48.983184 kernel: ima: No architecture policies found
May 13 23:46:48.983192 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 23:46:48.983199 kernel: clk: Disabling unused clocks
May 13 23:46:48.983208 kernel: Freeing unused kernel memory: 38464K
May 13 23:46:48.983215 kernel: Run /init as init process
May 13 23:46:48.983222 kernel: with arguments:
May 13 23:46:48.983229 kernel: /init
May 13 23:46:48.983236 kernel: with environment:
May 13 23:46:48.983243 kernel: HOME=/
May 13 23:46:48.983250 kernel: TERM=linux
May 13 23:46:48.983257 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:46:48.983265 systemd[1]: Successfully made /usr/ read-only.
May 13 23:46:48.983277 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:46:48.983285 systemd[1]: Detected virtualization kvm.
May 13 23:46:48.983293 systemd[1]: Detected architecture arm64.
May 13 23:46:48.983300 systemd[1]: Running in initrd.
May 13 23:46:48.983308 systemd[1]: No hostname configured, using default hostname.
May 13 23:46:48.983315 systemd[1]: Hostname set to .
May 13 23:46:48.983323 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:46:48.983332 systemd[1]: Queued start job for default target initrd.target.
May 13 23:46:48.983340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:46:48.983348 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:46:48.983356 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:46:48.983365 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:46:48.983373 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:46:48.983381 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:46:48.983392 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:46:48.983407 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:46:48.983416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:46:48.983424 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:46:48.983431 systemd[1]: Reached target paths.target - Path Units.
May 13 23:46:48.983439 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:46:48.983447 systemd[1]: Reached target swap.target - Swaps.
May 13 23:46:48.983455 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:46:48.983463 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:46:48.983474 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:46:48.983484 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:46:48.983494 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:46:48.983503 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:46:48.983511 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:46:48.983519 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:46:48.983526 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:46:48.983534 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:46:48.983544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:46:48.983552 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:46:48.983560 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:46:48.983568 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:46:48.983575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:46:48.983583 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:46:48.983591 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:46:48.983599 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:46:48.983610 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:46:48.983618 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:46:48.983647 systemd-journald[236]: Collecting audit messages is disabled.
May 13 23:46:48.983668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:46:48.983677 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:46:48.983686 systemd-journald[236]: Journal started
May 13 23:46:48.983705 systemd-journald[236]: Runtime Journal (/run/log/journal/53ec176735254870962c9326467a9c18) is 5.9M, max 47.3M, 41.4M free.
May 13 23:46:48.969311 systemd-modules-load[238]: Inserted module 'overlay'
May 13 23:46:48.987432 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:46:48.987467 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:46:48.990329 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:46:48.993698 kernel: Bridge firewalling registered
May 13 23:46:48.992231 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 13 23:46:48.993248 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:46:48.997688 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:46:48.999396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:46:49.004550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:46:49.012810 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:46:49.014458 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:46:49.016857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:46:49.018926 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:46:49.024633 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:46:49.027225 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:46:49.046309 dracut-cmdline[275]: dracut-dracut-053
May 13 23:46:49.048897 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:46:49.068081 systemd-resolved[276]: Positive Trust Anchors:
May 13 23:46:49.068102 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:46:49.068133 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:46:49.073430 systemd-resolved[276]: Defaulting to hostname 'linux'.
May 13 23:46:49.077039 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:46:49.079562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:46:49.123444 kernel: SCSI subsystem initialized
May 13 23:46:49.129450 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:46:49.137461 kernel: iscsi: registered transport (tcp)
May 13 23:46:49.152733 kernel: iscsi: registered transport (qla4xxx)
May 13 23:46:49.152791 kernel: QLogic iSCSI HBA Driver
May 13 23:46:49.199812 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:46:49.201881 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:46:49.234423 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:46:49.234467 kernel: device-mapper: uevent: version 1.0.3
May 13 23:46:49.236106 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:46:49.283453 kernel: raid6: neonx8 gen() 15791 MB/s
May 13 23:46:49.300426 kernel: raid6: neonx4 gen() 15783 MB/s
May 13 23:46:49.317426 kernel: raid6: neonx2 gen() 13182 MB/s
May 13 23:46:49.334427 kernel: raid6: neonx1 gen() 10397 MB/s
May 13 23:46:49.351425 kernel: raid6: int64x8 gen() 6754 MB/s
May 13 23:46:49.368426 kernel: raid6: int64x4 gen() 7275 MB/s
May 13 23:46:49.385437 kernel: raid6: int64x2 gen() 6021 MB/s
May 13 23:46:49.402651 kernel: raid6: int64x1 gen() 5052 MB/s
May 13 23:46:49.402674 kernel: raid6: using algorithm neonx8 gen() 15791 MB/s
May 13 23:46:49.420628 kernel: raid6: .... xor() 11860 MB/s, rmw enabled
May 13 23:46:49.420647 kernel: raid6: using neon recovery algorithm
May 13 23:46:49.426858 kernel: xor: measuring software checksum speed
May 13 23:46:49.426892 kernel: 8regs : 21630 MB/sec
May 13 23:46:49.426902 kernel: 32regs : 20876 MB/sec
May 13 23:46:49.427563 kernel: arm64_neon : 27719 MB/sec
May 13 23:46:49.427575 kernel: xor: using function: arm64_neon (27719 MB/sec)
May 13 23:46:49.484428 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:46:49.500435 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:46:49.503268 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:46:49.525695 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 13 23:46:49.529356 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:46:49.532369 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:46:49.560698 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
May 13 23:46:49.592331 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:46:49.594941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:46:49.654423 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:46:49.659506 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:46:49.687438 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:46:49.689286 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:46:49.690999 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:46:49.693591 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:46:49.696908 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:46:49.718974 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:46:49.723568 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 23:46:49.728211 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 23:46:49.727787 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:46:49.727912 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:46:49.730687 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:46:49.733034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:46:49.733221 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:46:49.737988 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:46:49.741512 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:46:49.741537 kernel: GPT:9289727 != 19775487
May 13 23:46:49.741631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:46:49.746215 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:46:49.746236 kernel: GPT:9289727 != 19775487
May 13 23:46:49.746245 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:46:49.746253 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:46:49.765457 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (516)
May 13 23:46:49.765511 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (515)
May 13 23:46:49.768962 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 23:46:49.770539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:46:49.784373 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 23:46:49.799823 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 23:46:49.801065 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 23:46:49.810019 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:46:49.812126 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:46:49.814412 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:46:49.839554 disk-uuid[549]: Primary Header is updated.
May 13 23:46:49.839554 disk-uuid[549]: Secondary Entries is updated.
May 13 23:46:49.839554 disk-uuid[549]: Secondary Header is updated.
May 13 23:46:49.849470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:46:49.850036 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
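The GPT warnings above are typical of an image built for a smaller disk and then attached to a larger virtual one: the primary header at LBA 1 still records the backup header at LBA 9289727, while the 19775488-sector disk actually ends at LBA 19775487 (disk-uuid later rewrites both headers, per the "Secondary Header is updated" lines). A minimal sketch of the same consistency check, run against a tiny synthetic image (field offsets per the UEFI GPT header layout; the file name is invented for illustration):

```python
import struct

SECTOR = 512

def gpt_backup_lba(path):
    """Return (backup LBA stored in the primary GPT header, actual last LBA).
    The primary header lives at LBA 1; byte offset 32 within it holds the
    backup header's LBA, which the kernel compares against the disk end
    (hence 'GPT:9289727 != 19775487')."""
    with open(path, "rb") as f:
        f.seek(0, 2)
        last_lba = f.tell() // SECTOR - 1
        f.seek(SECTOR)
        hdr = f.read(92)
    if hdr[:8] != b"EFI PART":
        raise ValueError("no GPT signature at LBA 1")
    (backup_lba,) = struct.unpack_from("<Q", hdr, 32)
    return backup_lba, last_lba

# Synthetic 2048-sector image whose header still claims the backup header
# sits at LBA 1023 -- i.e. the disk was grown after the image was written,
# the situation logged above.
with open("grown.img", "wb") as f:
    f.seek(SECTOR)
    f.write(b"EFI PART" + bytes(16))       # signature, rev, size, crc, reserved
    f.write(struct.pack("<QQ", 1, 1023))   # current LBA (1), backup LBA (1023)
    f.truncate(2048 * SECTOR)

backup, last = gpt_backup_lba("grown.img")
print(f"backup header at LBA {backup}, disk ends at LBA {last}")
```

On a real grown disk, tools such as `sgdisk -e` or GNU Parted (as the kernel message suggests) relocate the backup structures to the true end of the disk.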
May 13 23:46:49.854815 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:46:50.860460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:46:50.861329 disk-uuid[553]: The operation has completed successfully.
May 13 23:46:50.906734 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:46:50.906846 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:46:50.932923 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:46:50.950775 sh[570]: Success
May 13 23:46:50.973416 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 23:46:51.013726 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:46:51.016633 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:46:51.031154 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:46:51.040550 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d
May 13 23:46:51.040601 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 23:46:51.040621 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:46:51.040632 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:46:51.042092 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:46:51.048031 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:46:51.049617 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:46:51.053547 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:46:51.055269 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:46:51.084784 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:46:51.084841 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:46:51.084852 kernel: BTRFS info (device vda6): using free space tree
May 13 23:46:51.090418 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:46:51.095439 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:46:51.101751 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:46:51.103872 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:46:51.197719 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:46:51.203092 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:46:51.254038 systemd-networkd[750]: lo: Link UP
May 13 23:46:51.254051 systemd-networkd[750]: lo: Gained carrier
May 13 23:46:51.254944 systemd-networkd[750]: Enumeration completed
May 13 23:46:51.255066 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:46:51.255706 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:46:51.255710 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:46:51.256882 systemd[1]: Reached target network.target - Network.
May 13 23:46:51.260445 systemd-networkd[750]: eth0: Link UP
May 13 23:46:51.260449 systemd-networkd[750]: eth0: Gained carrier
May 13 23:46:51.260457 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
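The "based on potentially unpredictable interface name" warning above comes from networkd matching the catch-all `zz-default.network` against the kernel-assigned name `eth0`, which is not guaranteed stable across boots. Where stable matching matters, a unit can match on a hardware property instead; an illustrative sketch (file name and MAC address are invented, not from this host):

```ini
; /etc/systemd/network/10-primary-nic.network  (hypothetical override)
[Match]
; Match the NIC by MAC address rather than by its mutable kernel name.
MACAddress=52:54:00:12:34:56

[Network]
DHCP=yes
```

A drop-in like this in `/etc` takes precedence over the `zz-default.network` shipped in `/usr/lib`, since networkd sorts network files lexically across both directories.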
May 13 23:46:51.279620 ignition[676]: Ignition 2.20.0
May 13 23:46:51.279632 ignition[676]: Stage: fetch-offline
May 13 23:46:51.279667 ignition[676]: no configs at "/usr/lib/ignition/base.d"
May 13 23:46:51.279676 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:46:51.282455 systemd-networkd[750]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:46:51.279868 ignition[676]: parsed url from cmdline: ""
May 13 23:46:51.279871 ignition[676]: no config URL provided
May 13 23:46:51.279876 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:46:51.279884 ignition[676]: no config at "/usr/lib/ignition/user.ign"
May 13 23:46:51.279911 ignition[676]: op(1): [started] loading QEMU firmware config module
May 13 23:46:51.279916 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 23:46:51.288868 ignition[676]: op(1): [finished] loading QEMU firmware config module
May 13 23:46:51.328643 ignition[676]: parsing config with SHA512: ede42521093887397cf2207624bcf991defe5af8a7e0b761107dd983bdfcfae69da272a931f34dd3211202fa488dd73d67886b45a11a6fdfed004ed28e6d51cd
May 13 23:46:51.337567 unknown[676]: fetched base config from "system"
May 13 23:46:51.337581 unknown[676]: fetched user config from "qemu"
May 13 23:46:51.337995 ignition[676]: fetch-offline: fetch-offline passed
May 13 23:46:51.340188 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:46:51.338070 ignition[676]: Ignition finished successfully
May 13 23:46:51.341643 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 23:46:51.343630 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
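The fetch-offline stage above found no config URL on the kernel command line and nothing at `/usr/lib/ignition/user.ign`, then loaded `qemu_fw_cfg` and fetched the user config from the QEMU firmware-config device. Such a config is plain Ignition JSON; a minimal illustrative sketch (the spec version and SSH key are placeholders, not the config this host actually hashed):

```json
{
  "ignition": { "version": "3.4.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder-key"]
      }
    ]
  }
}
```

With QEMU this is usually injected as `-fw_cfg name=opt/org.flatcar-linux/config,file=config.ign`, which matches the "fetched user config from \"qemu\"" line.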
May 13 23:46:51.367756 ignition[766]: Ignition 2.20.0
May 13 23:46:51.367768 ignition[766]: Stage: kargs
May 13 23:46:51.367942 ignition[766]: no configs at "/usr/lib/ignition/base.d"
May 13 23:46:51.367952 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:46:51.371848 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:46:51.368853 ignition[766]: kargs: kargs passed
May 13 23:46:51.368897 ignition[766]: Ignition finished successfully
May 13 23:46:51.374580 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:46:51.397850 ignition[774]: Ignition 2.20.0
May 13 23:46:51.397866 ignition[774]: Stage: disks
May 13 23:46:51.398041 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 13 23:46:51.401145 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:46:51.398051 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:46:51.402969 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:46:51.399011 ignition[774]: disks: disks passed
May 13 23:46:51.404723 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:46:51.399057 ignition[774]: Ignition finished successfully
May 13 23:46:51.406870 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:46:51.408835 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:46:51.410477 systemd[1]: Reached target basic.target - Basic System.
May 13 23:46:51.413550 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:46:51.440797 systemd-fsck[784]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:46:51.451837 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:46:51.454151 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:46:51.518427 kernel: EXT4-fs (vda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none.
May 13 23:46:51.518396 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:46:51.519783 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:46:51.522310 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:46:51.524048 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:46:51.525101 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 23:46:51.525170 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:46:51.525203 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:46:51.539019 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:46:51.541193 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:46:51.546501 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (793)
May 13 23:46:51.549177 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:46:51.549224 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:46:51.550007 kernel: BTRFS info (device vda6): using free space tree
May 13 23:46:51.552415 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:46:51.554498 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:46:51.608110 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:46:51.612899 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
May 13 23:46:51.617051 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:46:51.621207 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:46:51.731209 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:46:51.733379 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:46:51.735606 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:46:51.753479 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:46:51.777622 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:46:51.788790 ignition[908]: INFO : Ignition 2.20.0
May 13 23:46:51.788790 ignition[908]: INFO : Stage: mount
May 13 23:46:51.790535 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:46:51.790535 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:46:51.790535 ignition[908]: INFO : mount: mount passed
May 13 23:46:51.790535 ignition[908]: INFO : Ignition finished successfully
May 13 23:46:51.794442 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:46:51.796344 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:46:52.038288 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:46:52.039813 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:46:52.059259 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (921)
May 13 23:46:52.059313 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:46:52.059324 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:46:52.061018 kernel: BTRFS info (device vda6): using free space tree
May 13 23:46:52.063435 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:46:52.065239 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:46:52.090298 ignition[938]: INFO : Ignition 2.20.0
May 13 23:46:52.090298 ignition[938]: INFO : Stage: files
May 13 23:46:52.092099 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:46:52.092099 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:46:52.092099 ignition[938]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:46:52.095650 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:46:52.095650 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:46:52.098574 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:46:52.098574 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:46:52.098574 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:46:52.097720 unknown[938]: wrote ssh authorized keys file for user: core
May 13 23:46:52.104240 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 23:46:52.104240 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 13 23:46:52.161541 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:46:52.313735 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 23:46:52.313735 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:46:52.318161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 13 23:46:52.642478 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 23:46:52.660896 systemd-networkd[750]: eth0: Gained IPv6LL
May 13 23:46:53.052871 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:46:53.052871 ignition[938]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 23:46:53.057042 ignition[938]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:46:53.057042 ignition[938]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:46:53.057042 ignition[938]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 23:46:53.057042 ignition[938]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 23:46:53.057042 ignition[938]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:46:53.057042 ignition[938]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:46:53.057042 ignition[938]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 23:46:53.057042 ignition[938]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 23:46:53.077430 ignition[938]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:46:53.081196 ignition[938]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:46:53.083013 ignition[938]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 23:46:53.083013 ignition[938]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:46:53.083013 ignition[938]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:46:53.083013 ignition[938]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:46:53.083013 ignition[938]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:46:53.083013 ignition[938]: INFO : files: files passed
May 13 23:46:53.083013 ignition[938]: INFO : Ignition finished successfully
May 13 23:46:53.083349 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:46:53.086736 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:46:53.091551 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:46:53.110148 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:46:53.110264 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
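The files stage above (the helm tarball, the home-directory manifests, the kubernetes sysext and its `/etc/extensions` link, the unit presets) is the execution of declarations like the following Butane sketch. Paths and URLs are taken from the log; the variant/version line, the unit body, and the remaining files are abbreviated or assumed, since the actual config of this host is not shown:

```yaml
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/helm-v3.17.0-linux-arm64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
      contents:
        source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true        # matches "setting preset to enabled" above
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin
        ...
    - name: coreos-metadata.service
      enabled: false       # matches "setting preset to disabled" above
```

Butane transpiles this YAML into the Ignition JSON that the stages above consume; the sysext symlink under `/etc/extensions` is what makes systemd-sysext merge the kubernetes image into `/usr` after switch-root.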
May 13 23:46:53.114493 initrd-setup-root-after-ignition[966]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 23:46:53.116240 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:46:53.116240 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:46:53.122051 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:46:53.116898 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:46:53.119503 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:46:53.121658 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:46:53.173796 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:46:53.175035 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:46:53.176664 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:46:53.178782 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:46:53.180724 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:46:53.181624 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:46:53.208943 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:46:53.211557 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:46:53.234903 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:46:53.240500 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:46:53.241875 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:46:53.247265 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:46:53.247443 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:46:53.254081 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:46:53.255247 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:46:53.258722 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:46:53.260997 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:46:53.263350 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:46:53.265349 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:46:53.267461 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:46:53.269710 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:46:53.271649 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:46:53.273558 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:46:53.275569 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:46:53.275706 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:46:53.278552 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:46:53.280472 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:46:53.282545 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:46:53.284493 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:46:53.285947 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:46:53.286081 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:46:53.289221 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:46:53.289415 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:46:53.292040 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:46:53.293633 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:46:53.294456 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:46:53.295973 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:46:53.297923 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:46:53.300420 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:46:53.300610 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:46:53.302327 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:46:53.302489 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:46:53.304494 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:46:53.304668 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:46:53.307309 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:46:53.307485 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:46:53.310068 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:46:53.313467 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:46:53.315179 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:46:53.315386 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:46:53.317575 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:46:53.317738 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:46:53.334477 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:46:53.334640 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:46:53.352144 ignition[993]: INFO : Ignition 2.20.0
May 13 23:46:53.352144 ignition[993]: INFO : Stage: umount
May 13 23:46:53.352144 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:46:53.352144 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:46:53.356797 ignition[993]: INFO : umount: umount passed
May 13 23:46:53.356797 ignition[993]: INFO : Ignition finished successfully
May 13 23:46:53.357572 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:46:53.357695 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:46:53.359148 systemd[1]: Stopped target network.target - Network.
May 13 23:46:53.360761 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:46:53.360840 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:46:53.366791 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:46:53.366877 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:46:53.368842 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:46:53.368894 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:46:53.370692 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:46:53.370740 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:46:53.372878 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:46:53.378422 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:46:53.381102 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:46:53.386029 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:46:53.386136 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:46:53.390241 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:46:53.390522 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:46:53.390753 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:46:53.394306 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:46:53.395783 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:46:53.395836 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:46:53.398214 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:46:53.399305 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:46:53.399378 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:46:53.401948 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:46:53.402002 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:46:53.404962 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:46:53.405017 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:46:53.407206 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:46:53.407265 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:46:53.410731 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:46:53.415939 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:46:53.416012 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:46:53.425336 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:46:53.425489 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:46:53.432811 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:46:53.432951 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:46:53.435850 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:46:53.435977 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:46:53.438355 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:46:53.438542 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:46:53.440797 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:46:53.440838 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:46:53.442659 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:46:53.442726 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:46:53.445480 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:46:53.445539 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:46:53.448583 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:46:53.448642 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:46:53.451706 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:46:53.451767 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:46:53.454547 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:46:53.456809 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:46:53.456883 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:46:53.460043 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 23:46:53.460102 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:46:53.462636 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:46:53.462697 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:46:53.464869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:46:53.464930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:46:53.469329 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:46:53.469412 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:46:53.472286 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:46:53.472425 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:46:53.474127 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:46:53.476912 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:46:53.497479 systemd[1]: Switching root.
May 13 23:46:53.526859 systemd-journald[236]: Journal stopped
May 13 23:46:54.470778 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
May 13 23:46:54.470831 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:46:54.470843 kernel: SELinux: policy capability open_perms=1
May 13 23:46:54.470853 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:46:54.470952 kernel: SELinux: policy capability always_check_network=0
May 13 23:46:54.470968 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:46:54.470979 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:46:54.470990 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:46:54.471001 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:46:54.471021 kernel: audit: type=1403 audit(1747180013.698:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:46:54.471036 systemd[1]: Successfully loaded SELinux policy in 40.548ms.
May 13 23:46:54.471055 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.183ms.
May 13 23:46:54.471077 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:46:54.471101 systemd[1]: Detected virtualization kvm.
May 13 23:46:54.471113 systemd[1]: Detected architecture arm64.
May 13 23:46:54.471123 systemd[1]: Detected first boot.
May 13 23:46:54.471134 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:46:54.471144 zram_generator::config[1040]: No configuration found.
May 13 23:46:54.471156 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:46:54.471166 systemd[1]: Populated /etc with preset unit settings.
May 13 23:46:54.471177 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:46:54.471189 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:46:54.471200 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:46:54.471211 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:46:54.471221 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:46:54.471232 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:46:54.471243 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:46:54.471257 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:46:54.471268 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:46:54.471279 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:46:54.471292 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:46:54.471302 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:46:54.471313 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:46:54.471324 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:46:54.471336 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:46:54.471346 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:46:54.471357 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:46:54.471367 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:46:54.471446 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 23:46:54.471470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:46:54.471483 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:46:54.471493 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:46:54.471504 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:46:54.471519 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:46:54.471529 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:46:54.471540 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:46:54.471552 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:46:54.471563 systemd[1]: Reached target swap.target - Swaps.
May 13 23:46:54.471573 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:46:54.471584 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:46:54.471594 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:46:54.471607 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:46:54.471619 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:46:54.471629 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:46:54.471641 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:46:54.471653 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:46:54.471665 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:46:54.471676 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:46:54.471686 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:46:54.471696 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:46:54.471707 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:46:54.471719 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:46:54.471735 systemd[1]: Reached target machines.target - Containers.
May 13 23:46:54.471747 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:46:54.471760 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:46:54.471771 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:46:54.471782 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:46:54.471793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:46:54.471810 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:46:54.471824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:46:54.471834 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:46:54.471845 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:46:54.471859 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:46:54.471870 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:46:54.471881 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:46:54.471891 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:46:54.471902 kernel: fuse: init (API version 7.39)
May 13 23:46:54.471912 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:46:54.471923 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:46:54.471933 kernel: ACPI: bus type drm_connector registered
May 13 23:46:54.471943 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:46:54.471956 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:46:54.471966 kernel: loop: module loaded
May 13 23:46:54.471978 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:46:54.471990 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:46:54.472001 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:46:54.472012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:46:54.472023 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:46:54.472037 systemd[1]: Stopped verity-setup.service.
May 13 23:46:54.472054 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:46:54.472064 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:46:54.472130 systemd-journald[1111]: Collecting audit messages is disabled.
May 13 23:46:54.472164 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:46:54.472178 systemd-journald[1111]: Journal started
May 13 23:46:54.472201 systemd-journald[1111]: Runtime Journal (/run/log/journal/53ec176735254870962c9326467a9c18) is 5.9M, max 47.3M, 41.4M free.
May 13 23:46:54.192418 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:46:54.207589 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 23:46:54.208027 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:46:54.474623 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:46:54.475327 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:46:54.476738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:46:54.478177 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:46:54.479559 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:46:54.481174 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:46:54.482971 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:46:54.483157 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:46:54.484820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:46:54.485036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:46:54.487743 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:46:54.487938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:46:54.489577 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:46:54.489753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:46:54.491341 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:46:54.491701 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:46:54.493186 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:46:54.493350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:46:54.494891 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:46:54.496634 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:46:54.498294 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:46:54.499912 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:46:54.515995 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:46:54.518956 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:46:54.521346 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:46:54.522575 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:46:54.522630 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:46:54.524826 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:46:54.528389 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:46:54.530769 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:46:54.531946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:46:54.534521 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:46:54.536702 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:46:54.549108 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:46:54.550306 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:46:54.551485 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:46:54.552718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:46:54.557869 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:46:54.560418 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:46:54.564431 systemd-journald[1111]: Time spent on flushing to /var/log/journal/53ec176735254870962c9326467a9c18 is 32.941ms for 872 entries.
May 13 23:46:54.564431 systemd-journald[1111]: System Journal (/var/log/journal/53ec176735254870962c9326467a9c18) is 8M, max 195.6M, 187.6M free.
May 13 23:46:54.622634 systemd-journald[1111]: Received client request to flush runtime journal.
May 13 23:46:54.622687 kernel: loop0: detected capacity change from 0 to 103832
May 13 23:46:54.622702 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:46:54.563687 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:46:54.567871 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:46:54.569200 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:46:54.572445 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:46:54.586500 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:46:54.588555 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:46:54.593695 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:46:54.596588 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:46:54.612449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:46:54.614728 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
May 13 23:46:54.614739 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
May 13 23:46:54.618325 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 23:46:54.625567 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:46:54.628744 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:46:54.638595 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:46:54.649420 kernel: loop1: detected capacity change from 0 to 126448
May 13 23:46:54.671648 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:46:54.689491 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:46:54.692522 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:46:54.699426 kernel: loop2: detected capacity change from 0 to 201592
May 13 23:46:54.721223 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
May 13 23:46:54.721240 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
May 13 23:46:54.725983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:46:54.735745 kernel: loop3: detected capacity change from 0 to 103832
May 13 23:46:54.743682 kernel: loop4: detected capacity change from 0 to 126448
May 13 23:46:54.753427 kernel: loop5: detected capacity change from 0 to 201592
May 13 23:46:54.762587 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 23:46:54.763081 (sd-merge)[1186]: Merged extensions into '/usr'.
May 13 23:46:54.767213 systemd[1]: Reload requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:46:54.767231 systemd[1]: Reloading...
May 13 23:46:54.850439 zram_generator::config[1218]: No configuration found.
May 13 23:46:54.928087 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:46:54.947216 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:46:54.998559 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:46:54.998873 systemd[1]: Reloading finished in 231 ms.
May 13 23:46:55.017206 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:46:55.020735 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:46:55.039801 systemd[1]: Starting ensure-sysext.service...
May 13 23:46:55.041634 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:46:55.053604 systemd[1]: Reload requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
May 13 23:46:55.053623 systemd[1]: Reloading...
May 13 23:46:55.060317 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:46:55.060548 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:46:55.061161 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:46:55.061370 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 13 23:46:55.061443 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 13 23:46:55.064195 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:46:55.064207 systemd-tmpfiles[1250]: Skipping /boot
May 13 23:46:55.072822 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:46:55.072838 systemd-tmpfiles[1250]: Skipping /boot
May 13 23:46:55.104445 zram_generator::config[1277]: No configuration found.
May 13 23:46:55.192256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:46:55.244206 systemd[1]: Reloading finished in 190 ms.
May 13 23:46:55.262489 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:46:55.268757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:46:55.279702 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:46:55.282365 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:46:55.300463 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:46:55.304207 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:46:55.307243 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:46:55.312094 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:46:55.318813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:46:55.321976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:46:55.325513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:46:55.350135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:46:55.351330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:46:55.351484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:46:55.353448 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:46:55.357460 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:46:55.359500 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:46:55.361210 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:46:55.361445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:46:55.363414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:46:55.365428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:46:55.367068 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:46:55.367237 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:46:55.370586 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
May 13 23:46:55.378352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:46:55.380275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:46:55.386005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:46:55.389115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:46:55.390667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:46:55.390819 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:46:55.404725 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:46:55.406048 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:46:55.407088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:46:55.413878 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:46:55.415805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:46:55.415970 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:46:55.417615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:46:55.417767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:46:55.419614 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:46:55.419911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:46:55.435727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:46:55.437451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:46:55.444930 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:46:55.455854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:46:55.458541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:46:55.460183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:46:55.460233 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:46:55.463220 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:46:55.464477 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:46:55.466395 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:46:55.466810 augenrules[1383]: No rules
May 13 23:46:55.467982 systemd[1]: Finished ensure-sysext.service.
May 13 23:46:55.470566 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:46:55.470757 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:46:55.473438 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:46:55.474734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:46:55.474899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:46:55.477711 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:46:55.477879 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:46:55.479434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:46:55.479588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:46:55.481050 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:46:55.481228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:46:55.492352 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 23:46:55.492645 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:46:55.492710 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:46:55.494874 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 23:46:55.539543 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1355)
May 13 23:46:55.546245 systemd-resolved[1318]: Positive Trust Anchors:
May 13 23:46:55.548006 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:46:55.548038 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:46:55.570492 systemd-resolved[1318]: Defaulting to hostname 'linux'.
May 13 23:46:55.578136 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:46:55.579414 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:46:55.583587 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:46:55.589589 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:46:55.599319 systemd-networkd[1385]: lo: Link UP May 13 23:46:55.599514 systemd-networkd[1385]: lo: Gained carrier May 13 23:46:55.608441 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 23:46:55.609770 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:46:55.611902 systemd-networkd[1385]: Enumeration completed May 13 23:46:55.612022 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:46:55.613635 systemd[1]: Reached target network.target - Network. May 13 23:46:55.616466 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:46:55.616475 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:46:55.617394 systemd-networkd[1385]: eth0: Link UP May 13 23:46:55.617415 systemd-networkd[1385]: eth0: Gained carrier May 13 23:46:55.617430 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:46:55.619469 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:46:55.623632 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:46:55.625204 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:46:55.635227 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:46:55.638680 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:46:55.640379 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. May 13 23:46:55.642440 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
May 13 23:46:55.642505 systemd-timesyncd[1400]: Initial clock synchronization to Tue 2025-05-13 23:46:55.249142 UTC. May 13 23:46:55.649420 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:46:55.651105 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:46:55.656876 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:46:55.680239 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:46:55.693427 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:46:55.727996 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:46:55.729510 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:46:55.730574 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:46:55.731754 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:46:55.732978 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:46:55.734337 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:46:55.735528 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:46:55.736920 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:46:55.738135 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:46:55.738170 systemd[1]: Reached target paths.target - Path Units. May 13 23:46:55.739050 systemd[1]: Reached target timers.target - Timer Units. 
May 13 23:46:55.740888 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:46:55.743238 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:46:55.746988 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:46:55.748942 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:46:55.750227 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:46:55.754422 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:46:55.756158 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:46:55.758701 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:46:55.760356 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:46:55.761560 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:46:55.762506 systemd[1]: Reached target basic.target - Basic System. May 13 23:46:55.763457 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:46:55.763492 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:46:55.764487 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:46:55.766363 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:46:55.767574 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:46:55.770267 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:46:55.775664 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 13 23:46:55.776776 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:46:55.778722 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:46:55.779744 jq[1431]: false May 13 23:46:55.781411 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:46:55.789606 dbus-daemon[1430]: [system] SELinux support is enabled May 13 23:46:55.799696 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:46:55.803573 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:46:55.807841 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:46:55.809869 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:46:55.810337 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:46:55.813570 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:46:55.816205 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
May 13 23:46:55.818021 extend-filesystems[1432]: Found loop3 May 13 23:46:55.819003 extend-filesystems[1432]: Found loop4 May 13 23:46:55.819003 extend-filesystems[1432]: Found loop5 May 13 23:46:55.819003 extend-filesystems[1432]: Found vda May 13 23:46:55.819003 extend-filesystems[1432]: Found vda1 May 13 23:46:55.819003 extend-filesystems[1432]: Found vda2 May 13 23:46:55.819003 extend-filesystems[1432]: Found vda3 May 13 23:46:55.819003 extend-filesystems[1432]: Found usr May 13 23:46:55.819003 extend-filesystems[1432]: Found vda4 May 13 23:46:55.819003 extend-filesystems[1432]: Found vda6 May 13 23:46:55.819003 extend-filesystems[1432]: Found vda7 May 13 23:46:55.819003 extend-filesystems[1432]: Found vda9 May 13 23:46:55.819003 extend-filesystems[1432]: Checking size of /dev/vda9 May 13 23:46:55.818498 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:46:55.841801 extend-filesystems[1432]: Resized partition /dev/vda9 May 13 23:46:55.825451 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:46:55.845607 jq[1447]: true May 13 23:46:55.833527 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:46:55.833719 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:46:55.833966 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:46:55.834133 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:46:55.836421 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:46:55.836594 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 13 23:46:55.846153 extend-filesystems[1456]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:46:55.862912 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:46:55.868555 jq[1455]: true May 13 23:46:55.862961 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:46:55.864340 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:46:55.864362 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:46:55.868782 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:46:55.877009 tar[1454]: linux-arm64/LICENSE May 13 23:46:55.877295 tar[1454]: linux-arm64/helm May 13 23:46:55.880075 update_engine[1444]: I20250513 23:46:55.879901 1444 main.cc:92] Flatcar Update Engine starting May 13 23:46:55.885364 systemd[1]: Started update-engine.service - Update Engine. May 13 23:46:55.885626 update_engine[1444]: I20250513 23:46:55.885584 1444 update_check_scheduler.cc:74] Next update check in 6m53s May 13 23:46:55.888534 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:46:55.895459 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1360) May 13 23:46:55.911084 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (Power Button) May 13 23:46:55.911846 systemd-logind[1443]: New seat seat0. May 13 23:46:55.915961 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:46:55.924256 systemd[1]: Started systemd-logind.service - User Login Management. 
May 13 23:46:55.954438 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:46:55.973375 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:46:55.973375 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:46:55.973375 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 23:46:55.982677 extend-filesystems[1432]: Resized filesystem in /dev/vda9 May 13 23:46:55.976044 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:46:55.976284 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:46:55.988176 bash[1485]: Updated "/home/core/.ssh/authorized_keys" May 13 23:46:55.990225 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:46:55.992927 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 23:46:56.020853 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:46:56.128612 containerd[1465]: time="2025-05-13T23:46:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:46:56.129947 containerd[1465]: time="2025-05-13T23:46:56.129911880Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:46:56.141941 containerd[1465]: time="2025-05-13T23:46:56.141186681Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.874µs" May 13 23:46:56.141941 containerd[1465]: time="2025-05-13T23:46:56.141234761Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:46:56.141941 containerd[1465]: time="2025-05-13T23:46:56.141255111Z" 
level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:46:56.141941 containerd[1465]: time="2025-05-13T23:46:56.141484742Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:46:56.141941 containerd[1465]: time="2025-05-13T23:46:56.141503380Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:46:56.141941 containerd[1465]: time="2025-05-13T23:46:56.141530881Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:46:56.141941 containerd[1465]: time="2025-05-13T23:46:56.141599273Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:46:56.141941 containerd[1465]: time="2025-05-13T23:46:56.141611749Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:46:56.142734 containerd[1465]: time="2025-05-13T23:46:56.142227916Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:46:56.142734 containerd[1465]: time="2025-05-13T23:46:56.142425368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:46:56.142734 containerd[1465]: time="2025-05-13T23:46:56.142459259Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:46:56.142734 containerd[1465]: time="2025-05-13T23:46:56.142469909Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native 
type=io.containerd.snapshotter.v1 May 13 23:46:56.142734 containerd[1465]: time="2025-05-13T23:46:56.142560857Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:46:56.142860 containerd[1465]: time="2025-05-13T23:46:56.142793645Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:46:56.142860 containerd[1465]: time="2025-05-13T23:46:56.142830465Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:46:56.142860 containerd[1465]: time="2025-05-13T23:46:56.142844729Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:46:56.144085 containerd[1465]: time="2025-05-13T23:46:56.144051997Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:46:56.144584 containerd[1465]: time="2025-05-13T23:46:56.144556752Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:46:56.144695 containerd[1465]: time="2025-05-13T23:46:56.144675999Z" level=info msg="metadata content store policy set" policy=shared May 13 23:46:56.153469 containerd[1465]: time="2025-05-13T23:46:56.153426338Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:46:56.153532 containerd[1465]: time="2025-05-13T23:46:56.153503706Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:46:56.153532 containerd[1465]: time="2025-05-13T23:46:56.153521242Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:46:56.153610 containerd[1465]: time="2025-05-13T23:46:56.153535239Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:46:56.153610 containerd[1465]: time="2025-05-13T23:46:56.153548857Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:46:56.153610 containerd[1465]: time="2025-05-13T23:46:56.153559393Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:46:56.153610 containerd[1465]: time="2025-05-13T23:46:56.153572554Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:46:56.153610 containerd[1465]: time="2025-05-13T23:46:56.153585791Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:46:56.153610 containerd[1465]: time="2025-05-13T23:46:56.153597316Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:46:56.153610 containerd[1465]: time="2025-05-13T23:46:56.153608157Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:46:56.153743 containerd[1465]: time="2025-05-13T23:46:56.153625540Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:46:56.153743 containerd[1465]: time="2025-05-13T23:46:56.153646499Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153793019Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153820481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153837598Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153849238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153861600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153874152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153893209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153902946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153914244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153924780Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.153935697Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.154314815Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.154330714Z" level=info msg="Start snapshots syncer" May 13 23:46:56.154368 containerd[1465]: time="2025-05-13T23:46:56.154367953Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:46:56.154641 
containerd[1465]: time="2025-05-13T23:46:56.154602567Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:46:56.154733 containerd[1465]: time="2025-05-13T23:46:56.154656542Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 
13 23:46:56.154759 containerd[1465]: time="2025-05-13T23:46:56.154729080Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:46:56.154850 containerd[1465]: time="2025-05-13T23:46:56.154829879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:46:56.154875 containerd[1465]: time="2025-05-13T23:46:56.154856809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:46:56.154875 containerd[1465]: time="2025-05-13T23:46:56.154867307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:46:56.154906 containerd[1465]: time="2025-05-13T23:46:56.154880811Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:46:56.154906 containerd[1465]: time="2025-05-13T23:46:56.154892830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:46:56.154906 containerd[1465]: time="2025-05-13T23:46:56.154903062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:46:56.154957 containerd[1465]: time="2025-05-13T23:46:56.154913066Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:46:56.154957 containerd[1465]: time="2025-05-13T23:46:56.154936307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:46:56.154991 containerd[1465]: time="2025-05-13T23:46:56.154955021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:46:56.154991 containerd[1465]: time="2025-05-13T23:46:56.154965976Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:46:56.157404 
containerd[1465]: time="2025-05-13T23:46:56.155871759Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.155906183Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.155916758Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.155927598Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.155936195Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.155946389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.155957039Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.156083209Z" level=info msg="runtime interface created" May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.156092756Z" level=info msg="created NRI interface" May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.156102608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.156116035Z" level=info msg="Connect containerd service" May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.156148291Z" level=info msg="using experimental 
NRI integration - disable nri plugin to prevent this" May 13 23:46:56.157404 containerd[1465]: time="2025-05-13T23:46:56.157225965Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:46:56.277496 tar[1454]: linux-arm64/README.md May 13 23:46:56.293877 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:46:56.294002 containerd[1465]: time="2025-05-13T23:46:56.293948807Z" level=info msg="Start subscribing containerd event" May 13 23:46:56.294037 containerd[1465]: time="2025-05-13T23:46:56.294022789Z" level=info msg="Start recovering state" May 13 23:46:56.294129 containerd[1465]: time="2025-05-13T23:46:56.294113775Z" level=info msg="Start event monitor" May 13 23:46:56.294156 containerd[1465]: time="2025-05-13T23:46:56.294133478Z" level=info msg="Start cni network conf syncer for default" May 13 23:46:56.294156 containerd[1465]: time="2025-05-13T23:46:56.294142455Z" level=info msg="Start streaming server" May 13 23:46:56.294156 containerd[1465]: time="2025-05-13T23:46:56.294151508Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:46:56.294223 containerd[1465]: time="2025-05-13T23:46:56.294158469Z" level=info msg="runtime interface starting up..." May 13 23:46:56.294223 containerd[1465]: time="2025-05-13T23:46:56.294164212Z" level=info msg="starting plugins..." May 13 23:46:56.294223 containerd[1465]: time="2025-05-13T23:46:56.294175547Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:46:56.294285 containerd[1465]: time="2025-05-13T23:46:56.294253600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:46:56.294322 containerd[1465]: time="2025-05-13T23:46:56.294308792Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 23:46:56.294379 containerd[1465]: time="2025-05-13T23:46:56.294365087Z" level=info msg="containerd successfully booted in 0.167295s" May 13 23:46:56.295541 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:46:56.690989 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:46:56.713823 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:46:56.718995 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:46:56.740853 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:46:56.741151 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:46:56.744906 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:46:56.757519 systemd-networkd[1385]: eth0: Gained IPv6LL May 13 23:46:56.760949 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:46:56.762808 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:46:56.765738 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:46:56.768655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:46:56.777954 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:46:56.780737 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:46:56.792027 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:46:56.794246 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 23:46:56.795874 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:46:56.804437 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:46:56.819179 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:46:56.820467 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 13 23:46:56.822157 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:46:57.403770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:46:57.405278 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:46:57.407852 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:46:57.411348 systemd[1]: Startup finished in 587ms (kernel) + 4.977s (initrd) + 3.761s (userspace) = 9.325s. May 13 23:46:57.918911 kubelet[1557]: E0513 23:46:57.918850 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:46:57.921174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:46:57.921320 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:46:57.921650 systemd[1]: kubelet.service: Consumed 830ms CPU time, 250.7M memory peak. May 13 23:47:01.211323 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:47:01.212543 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:60600.service - OpenSSH per-connection server daemon (10.0.0.1:60600). May 13 23:47:01.299806 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 60600 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 13 23:47:01.303737 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:01.316872 systemd-logind[1443]: New session 1 of user core. May 13 23:47:01.317857 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 13 23:47:01.318965 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:47:01.339221 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:47:01.344725 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:47:01.359854 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:47:01.362074 systemd-logind[1443]: New session c1 of user core. May 13 23:47:01.479941 systemd[1575]: Queued start job for default target default.target. May 13 23:47:01.495481 systemd[1575]: Created slice app.slice - User Application Slice. May 13 23:47:01.495651 systemd[1575]: Reached target paths.target - Paths. May 13 23:47:01.495771 systemd[1575]: Reached target timers.target - Timers. May 13 23:47:01.497241 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:47:01.507180 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:47:01.507244 systemd[1575]: Reached target sockets.target - Sockets. May 13 23:47:01.507288 systemd[1575]: Reached target basic.target - Basic System. May 13 23:47:01.507316 systemd[1575]: Reached target default.target - Main User Target. May 13 23:47:01.507340 systemd[1575]: Startup finished in 138ms. May 13 23:47:01.507625 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:47:01.522589 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:47:01.582207 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:60606.service - OpenSSH per-connection server daemon (10.0.0.1:60606). 
May 13 23:47:01.642475 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 60606 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 13 23:47:01.643142 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:01.647185 systemd-logind[1443]: New session 2 of user core. May 13 23:47:01.656662 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:47:01.708527 sshd[1588]: Connection closed by 10.0.0.1 port 60606 May 13 23:47:01.708974 sshd-session[1586]: pam_unix(sshd:session): session closed for user core May 13 23:47:01.730214 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:60606.service: Deactivated successfully. May 13 23:47:01.732739 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:47:01.733954 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. May 13 23:47:01.738199 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:60610.service - OpenSSH per-connection server daemon (10.0.0.1:60610). May 13 23:47:01.739165 systemd-logind[1443]: Removed session 2. May 13 23:47:01.791806 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 60610 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 13 23:47:01.793042 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:01.797147 systemd-logind[1443]: New session 3 of user core. May 13 23:47:01.804637 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:47:01.852730 sshd[1596]: Connection closed by 10.0.0.1 port 60610 May 13 23:47:01.853095 sshd-session[1593]: pam_unix(sshd:session): session closed for user core May 13 23:47:01.874308 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:60610.service: Deactivated successfully. May 13 23:47:01.876487 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:47:01.877625 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. 
May 13 23:47:01.880236 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:60618.service - OpenSSH per-connection server daemon (10.0.0.1:60618). May 13 23:47:01.882075 systemd-logind[1443]: Removed session 3. May 13 23:47:01.945148 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 60618 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 13 23:47:01.946022 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:01.953920 systemd-logind[1443]: New session 4 of user core. May 13 23:47:01.969624 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:47:02.027244 sshd[1604]: Connection closed by 10.0.0.1 port 60618 May 13 23:47:02.027007 sshd-session[1601]: pam_unix(sshd:session): session closed for user core May 13 23:47:02.044037 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:60618.service: Deactivated successfully. May 13 23:47:02.045842 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:47:02.048571 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. May 13 23:47:02.053041 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:60628.service - OpenSSH per-connection server daemon (10.0.0.1:60628). May 13 23:47:02.053928 systemd-logind[1443]: Removed session 4. May 13 23:47:02.122492 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 60628 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 13 23:47:02.123715 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:02.128956 systemd-logind[1443]: New session 5 of user core. May 13 23:47:02.137645 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 13 23:47:02.203349 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:47:02.203672 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:47:02.218423 sudo[1613]: pam_unix(sudo:session): session closed for user root May 13 23:47:02.221830 sshd[1612]: Connection closed by 10.0.0.1 port 60628 May 13 23:47:02.222536 sshd-session[1609]: pam_unix(sshd:session): session closed for user core May 13 23:47:02.235951 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:60628.service: Deactivated successfully. May 13 23:47:02.237770 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:47:02.239428 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. May 13 23:47:02.242582 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:60632.service - OpenSSH per-connection server daemon (10.0.0.1:60632). May 13 23:47:02.243705 systemd-logind[1443]: Removed session 5. May 13 23:47:02.306425 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 60632 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 13 23:47:02.307707 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:02.313063 systemd-logind[1443]: New session 6 of user core. May 13 23:47:02.319666 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 13 23:47:02.370989 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:47:02.371280 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:47:02.374971 sudo[1623]: pam_unix(sudo:session): session closed for user root May 13 23:47:02.380300 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:47:02.380631 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:47:02.390788 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:47:02.432482 augenrules[1645]: No rules May 13 23:47:02.433253 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:47:02.433505 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:47:02.435661 sudo[1622]: pam_unix(sudo:session): session closed for user root May 13 23:47:02.438004 sshd[1621]: Connection closed by 10.0.0.1 port 60632 May 13 23:47:02.437470 sshd-session[1618]: pam_unix(sshd:session): session closed for user core May 13 23:47:02.448292 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:60632.service: Deactivated successfully. May 13 23:47:02.450757 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:47:02.453229 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. May 13 23:47:02.455982 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:60638.service - OpenSSH per-connection server daemon (10.0.0.1:60638). May 13 23:47:02.457103 systemd-logind[1443]: Removed session 6. May 13 23:47:02.513441 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 60638 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 13 23:47:02.514325 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:02.521315 systemd-logind[1443]: New session 7 of user core. 
May 13 23:47:02.527646 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:47:02.578684 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:47:02.578994 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:47:03.007135 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:47:03.021871 (dockerd)[1678]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:47:03.328015 dockerd[1678]: time="2025-05-13T23:47:03.327866077Z" level=info msg="Starting up" May 13 23:47:03.330039 dockerd[1678]: time="2025-05-13T23:47:03.329993755Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:47:03.477459 dockerd[1678]: time="2025-05-13T23:47:03.477177282Z" level=info msg="Loading containers: start." May 13 23:47:03.635427 kernel: Initializing XFRM netlink socket May 13 23:47:03.706286 systemd-networkd[1385]: docker0: Link UP May 13 23:47:03.786334 dockerd[1678]: time="2025-05-13T23:47:03.786221262Z" level=info msg="Loading containers: done." May 13 23:47:03.809151 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3329073896-merged.mount: Deactivated successfully. 
May 13 23:47:03.817503 dockerd[1678]: time="2025-05-13T23:47:03.817451060Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:47:03.817665 dockerd[1678]: time="2025-05-13T23:47:03.817553369Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:47:03.818140 dockerd[1678]: time="2025-05-13T23:47:03.818123175Z" level=info msg="Daemon has completed initialization" May 13 23:47:03.860455 dockerd[1678]: time="2025-05-13T23:47:03.860204142Z" level=info msg="API listen on /run/docker.sock" May 13 23:47:03.860817 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:47:04.539169 containerd[1465]: time="2025-05-13T23:47:04.538859579Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 23:47:05.218703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2316460854.mount: Deactivated successfully. 
May 13 23:47:06.399672 containerd[1465]: time="2025-05-13T23:47:06.399605547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:06.401576 containerd[1465]: time="2025-05-13T23:47:06.401506094Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 13 23:47:06.402541 containerd[1465]: time="2025-05-13T23:47:06.402502227Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:06.411666 containerd[1465]: time="2025-05-13T23:47:06.411209286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:06.412074 containerd[1465]: time="2025-05-13T23:47:06.411877349Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.872964824s" May 13 23:47:06.412074 containerd[1465]: time="2025-05-13T23:47:06.411932863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 13 23:47:06.412674 containerd[1465]: time="2025-05-13T23:47:06.412638476Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 23:47:07.561616 containerd[1465]: time="2025-05-13T23:47:07.561556218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:07.563322 containerd[1465]: time="2025-05-13T23:47:07.563257107Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 13 23:47:07.565845 containerd[1465]: time="2025-05-13T23:47:07.564535661Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:07.570686 containerd[1465]: time="2025-05-13T23:47:07.570634668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:07.571787 containerd[1465]: time="2025-05-13T23:47:07.571723747Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.159051814s" May 13 23:47:07.571787 containerd[1465]: time="2025-05-13T23:47:07.571759538Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 13 23:47:07.572636 containerd[1465]: time="2025-05-13T23:47:07.572238190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 23:47:08.171684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:47:08.173637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:08.319201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:47:08.322778 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:47:08.399869 kubelet[1949]: E0513 23:47:08.399805 1949 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:47:08.404271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:47:08.404688 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:47:08.405044 systemd[1]: kubelet.service: Consumed 152ms CPU time, 105.1M memory peak. May 13 23:47:08.778242 containerd[1465]: time="2025-05-13T23:47:08.777295872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:08.778242 containerd[1465]: time="2025-05-13T23:47:08.778025668Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 13 23:47:08.778804 containerd[1465]: time="2025-05-13T23:47:08.778774434Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:08.782532 containerd[1465]: time="2025-05-13T23:47:08.782487176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:08.784091 containerd[1465]: time="2025-05-13T23:47:08.784043204Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id 
\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.211769762s" May 13 23:47:08.784146 containerd[1465]: time="2025-05-13T23:47:08.784089740Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 13 23:47:08.784618 containerd[1465]: time="2025-05-13T23:47:08.784590422Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 23:47:09.696484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785032377.mount: Deactivated successfully. May 13 23:47:09.949778 containerd[1465]: time="2025-05-13T23:47:09.949651233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:09.951036 containerd[1465]: time="2025-05-13T23:47:09.950850214Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 13 23:47:09.952114 containerd[1465]: time="2025-05-13T23:47:09.951900929Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:09.953711 containerd[1465]: time="2025-05-13T23:47:09.953672397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:09.955087 containerd[1465]: time="2025-05-13T23:47:09.954916306Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.170289632s" May 13 23:47:09.955087 containerd[1465]: time="2025-05-13T23:47:09.954954850Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 13 23:47:09.955972 containerd[1465]: time="2025-05-13T23:47:09.955695549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 23:47:10.488534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653110897.mount: Deactivated successfully. May 13 23:47:11.272928 containerd[1465]: time="2025-05-13T23:47:11.272176116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:11.272928 containerd[1465]: time="2025-05-13T23:47:11.272871998Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 13 23:47:11.273600 containerd[1465]: time="2025-05-13T23:47:11.273564106Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:11.276381 containerd[1465]: time="2025-05-13T23:47:11.276324786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:11.277913 containerd[1465]: time="2025-05-13T23:47:11.277457556Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.321727289s" May 13 23:47:11.277913 containerd[1465]: time="2025-05-13T23:47:11.277495623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 13 23:47:11.278133 containerd[1465]: time="2025-05-13T23:47:11.278099915Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:47:11.708224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3389073.mount: Deactivated successfully. May 13 23:47:11.714191 containerd[1465]: time="2025-05-13T23:47:11.714129727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:11.714691 containerd[1465]: time="2025-05-13T23:47:11.714635198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 23:47:11.717241 containerd[1465]: time="2025-05-13T23:47:11.717190369Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:11.723536 containerd[1465]: time="2025-05-13T23:47:11.723480526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:11.724752 containerd[1465]: time="2025-05-13T23:47:11.724367494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 446.231937ms" May 13 23:47:11.724752 containerd[1465]: time="2025-05-13T23:47:11.724415375Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 13 23:47:11.724876 containerd[1465]: time="2025-05-13T23:47:11.724845429Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 23:47:12.287047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215585803.mount: Deactivated successfully. May 13 23:47:13.980874 containerd[1465]: time="2025-05-13T23:47:13.980826697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:13.981744 containerd[1465]: time="2025-05-13T23:47:13.981381431Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 13 23:47:13.983962 containerd[1465]: time="2025-05-13T23:47:13.982464553Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:13.985342 containerd[1465]: time="2025-05-13T23:47:13.985311090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:13.986683 containerd[1465]: time="2025-05-13T23:47:13.986654167Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"67941650\" in 2.26177364s" May 13 23:47:13.986752 containerd[1465]: time="2025-05-13T23:47:13.986688791Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 13 23:47:18.654337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:47:18.656109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:18.830735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:18.842786 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:47:18.890802 kubelet[2112]: E0513 23:47:18.890741 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:47:18.893458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:47:18.893619 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:47:18.894203 systemd[1]: kubelet.service: Consumed 141ms CPU time, 104.4M memory peak. May 13 23:47:19.502313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:19.502891 systemd[1]: kubelet.service: Consumed 141ms CPU time, 104.4M memory peak. May 13 23:47:19.505694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:19.537486 systemd[1]: Reload requested from client PID 2127 ('systemctl') (unit session-7.scope)... May 13 23:47:19.537507 systemd[1]: Reloading... May 13 23:47:19.622442 zram_generator::config[2170]: No configuration found. 
May 13 23:47:19.848695 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:47:19.925412 systemd[1]: Reloading finished in 387 ms. May 13 23:47:19.984596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:19.987969 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:47:19.988205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:19.988255 systemd[1]: kubelet.service: Consumed 106ms CPU time, 90.2M memory peak. May 13 23:47:19.989847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:20.117043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:20.121875 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:47:20.159340 kubelet[2219]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:47:20.159756 kubelet[2219]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 23:47:20.159816 kubelet[2219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 23:47:20.159965 kubelet[2219]: I0513 23:47:20.159933 2219 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:47:21.109425 kubelet[2219]: I0513 23:47:21.108106 2219 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:47:21.109425 kubelet[2219]: I0513 23:47:21.108141 2219 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:47:21.109425 kubelet[2219]: I0513 23:47:21.108446 2219 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:47:21.143522 kubelet[2219]: E0513 23:47:21.142954 2219 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:21.145093 kubelet[2219]: I0513 23:47:21.145065 2219 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:47:21.166819 kubelet[2219]: I0513 23:47:21.166787 2219 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:47:21.169809 kubelet[2219]: I0513 23:47:21.169789 2219 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:47:21.173680 kubelet[2219]: I0513 23:47:21.173621 2219 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:47:21.173917 kubelet[2219]: I0513 23:47:21.173687 2219 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:47:21.174069 kubelet[2219]: I0513 23:47:21.174041 2219 topology_manager.go:138] "Creating topology manager with none policy" 
May 13 23:47:21.174069 kubelet[2219]: I0513 23:47:21.174054 2219 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:47:21.174327 kubelet[2219]: I0513 23:47:21.174305 2219 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:21.180849 kubelet[2219]: I0513 23:47:21.180803 2219 kubelet.go:446] "Attempting to sync node with API server" May 13 23:47:21.180902 kubelet[2219]: I0513 23:47:21.180854 2219 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:47:21.180902 kubelet[2219]: I0513 23:47:21.180884 2219 kubelet.go:352] "Adding apiserver pod source" May 13 23:47:21.180902 kubelet[2219]: I0513 23:47:21.180900 2219 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:47:21.182366 kubelet[2219]: W0513 23:47:21.182306 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 13 23:47:21.182410 kubelet[2219]: E0513 23:47:21.182368 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:21.185137 kubelet[2219]: W0513 23:47:21.185095 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 13 23:47:21.185202 kubelet[2219]: E0513 23:47:21.185141 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:21.186430 kubelet[2219]: I0513 23:47:21.186289 2219 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:47:21.187195 kubelet[2219]: I0513 23:47:21.187169 2219 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:47:21.187405 kubelet[2219]: W0513 23:47:21.187384 2219 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:47:21.188747 kubelet[2219]: I0513 23:47:21.188544 2219 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:47:21.188747 kubelet[2219]: I0513 23:47:21.188583 2219 server.go:1287] "Started kubelet" May 13 23:47:21.189260 kubelet[2219]: I0513 23:47:21.189193 2219 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:47:21.190470 kubelet[2219]: I0513 23:47:21.190424 2219 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:47:21.191162 kubelet[2219]: I0513 23:47:21.191128 2219 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:47:21.191862 kubelet[2219]: I0513 23:47:21.191832 2219 server.go:490] "Adding debug handlers to kubelet server" May 13 23:47:21.191862 kubelet[2219]: I0513 23:47:21.191858 2219 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:47:21.197530 kubelet[2219]: I0513 23:47:21.197473 2219 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:47:21.200475 kubelet[2219]: E0513 23:47:21.199341 2219 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3afd91b25f14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:47:21.188564756 +0000 UTC m=+1.062928704,LastTimestamp:2025-05-13 23:47:21.188564756 +0000 UTC m=+1.062928704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 23:47:21.200629 kubelet[2219]: I0513 23:47:21.200601 2219 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:47:21.200926 kubelet[2219]: E0513 23:47:21.200384 2219 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:21.201134 kubelet[2219]: I0513 23:47:21.201109 2219 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:47:21.201215 kubelet[2219]: I0513 23:47:21.201201 2219 reconciler.go:26] "Reconciler: start to sync state" May 13 23:47:21.207593 kubelet[2219]: W0513 23:47:21.202814 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 13 23:47:21.207593 kubelet[2219]: E0513 23:47:21.202871 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" 
logger="UnhandledError" May 13 23:47:21.209219 kubelet[2219]: E0513 23:47:21.209180 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" May 13 23:47:21.211655 kubelet[2219]: E0513 23:47:21.211631 2219 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:47:21.212055 kubelet[2219]: I0513 23:47:21.212033 2219 factory.go:221] Registration of the containerd container factory successfully May 13 23:47:21.212055 kubelet[2219]: I0513 23:47:21.212055 2219 factory.go:221] Registration of the systemd container factory successfully May 13 23:47:21.212163 kubelet[2219]: I0513 23:47:21.212144 2219 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:47:21.221757 kubelet[2219]: I0513 23:47:21.221581 2219 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:47:21.223124 kubelet[2219]: I0513 23:47:21.223103 2219 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:47:21.223215 kubelet[2219]: I0513 23:47:21.223204 2219 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:47:21.223301 kubelet[2219]: I0513 23:47:21.223290 2219 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 23:47:21.223345 kubelet[2219]: I0513 23:47:21.223338 2219 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:47:21.223503 kubelet[2219]: E0513 23:47:21.223484 2219 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:47:21.224097 kubelet[2219]: W0513 23:47:21.224048 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 13 23:47:21.224176 kubelet[2219]: E0513 23:47:21.224102 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:21.228337 kubelet[2219]: I0513 23:47:21.228295 2219 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:47:21.228337 kubelet[2219]: I0513 23:47:21.228314 2219 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:47:21.228337 kubelet[2219]: I0513 23:47:21.228333 2219 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:21.296647 kubelet[2219]: I0513 23:47:21.296603 2219 policy_none.go:49] "None policy: Start" May 13 23:47:21.296647 kubelet[2219]: I0513 23:47:21.296635 2219 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:47:21.296647 kubelet[2219]: I0513 23:47:21.296647 2219 state_mem.go:35] "Initializing new in-memory state store" May 13 23:47:21.301824 kubelet[2219]: E0513 23:47:21.301778 2219 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:21.302339 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. May 13 23:47:21.317150 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:47:21.319964 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:47:21.323627 kubelet[2219]: E0513 23:47:21.323581 2219 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:47:21.331850 kubelet[2219]: I0513 23:47:21.331813 2219 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:47:21.332086 kubelet[2219]: I0513 23:47:21.332058 2219 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:47:21.332122 kubelet[2219]: I0513 23:47:21.332078 2219 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:47:21.332407 kubelet[2219]: I0513 23:47:21.332375 2219 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:47:21.334328 kubelet[2219]: E0513 23:47:21.334245 2219 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 23:47:21.334328 kubelet[2219]: E0513 23:47:21.334289 2219 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 23:47:21.410536 kubelet[2219]: E0513 23:47:21.410381 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" May 13 23:47:21.434265 kubelet[2219]: I0513 23:47:21.433844 2219 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:47:21.434433 kubelet[2219]: E0513 23:47:21.434316 2219 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" May 13 23:47:21.541106 systemd[1]: Created slice kubepods-burstable-poda59a3965872e24de5a69b5ba792f14b7.slice - libcontainer container kubepods-burstable-poda59a3965872e24de5a69b5ba792f14b7.slice. May 13 23:47:21.553417 kubelet[2219]: E0513 23:47:21.552686 2219 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:21.557137 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 23:47:21.558877 kubelet[2219]: E0513 23:47:21.558843 2219 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:21.569941 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 13 23:47:21.572829 kubelet[2219]: E0513 23:47:21.572805 2219 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:21.604877 kubelet[2219]: I0513 23:47:21.604588 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a59a3965872e24de5a69b5ba792f14b7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a59a3965872e24de5a69b5ba792f14b7\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:21.604877 kubelet[2219]: I0513 23:47:21.604634 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a59a3965872e24de5a69b5ba792f14b7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a59a3965872e24de5a69b5ba792f14b7\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:21.604877 kubelet[2219]: I0513 23:47:21.604664 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a59a3965872e24de5a69b5ba792f14b7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a59a3965872e24de5a69b5ba792f14b7\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:21.604877 kubelet[2219]: I0513 23:47:21.604686 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:21.604877 kubelet[2219]: I0513 23:47:21.604704 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:21.605136 kubelet[2219]: I0513 23:47:21.604729 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:21.605136 kubelet[2219]: I0513 23:47:21.604761 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:21.605136 kubelet[2219]: I0513 23:47:21.604786 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:21.605136 kubelet[2219]: I0513 23:47:21.604801 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 23:47:21.636658 kubelet[2219]: I0513 23:47:21.636502 2219 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:47:21.636890 kubelet[2219]: 
E0513 23:47:21.636844 2219 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" May 13 23:47:21.811742 kubelet[2219]: E0513 23:47:21.811615 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" May 13 23:47:21.858984 containerd[1465]: time="2025-05-13T23:47:21.858938896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a59a3965872e24de5a69b5ba792f14b7,Namespace:kube-system,Attempt:0,}" May 13 23:47:21.859773 containerd[1465]: time="2025-05-13T23:47:21.859742015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 23:47:21.874740 containerd[1465]: time="2025-05-13T23:47:21.874686300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 23:47:21.906086 containerd[1465]: time="2025-05-13T23:47:21.905933940Z" level=info msg="connecting to shim 8e8c5c0c59a001b5ffe502dd9aaaa26896b19f2252b67f8bdcf93f6812481f3b" address="unix:///run/containerd/s/6d2876a98707b39038b46ef3671c2419a7bc455a4ab0777bc0af35ac65a262b9" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:21.917577 containerd[1465]: time="2025-05-13T23:47:21.917032854Z" level=info msg="connecting to shim 3e8bd1acc20205b4311ae3a6282823727b8a6edb043e32e09137dfa3e5780a73" address="unix:///run/containerd/s/38dc09238658f49c432cea0021a4a0d1a902e7e13ab77b69b44c7e3af6c21197" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:21.924497 containerd[1465]: time="2025-05-13T23:47:21.924394849Z" level=info 
msg="connecting to shim 12bdc41044e5bb14e37db27d0f4403092fd8d4c0c7b5459f0df97c71829177c2" address="unix:///run/containerd/s/ca1ad8fe242067fd751ceafd97121d2c2fce1007546bb911d6d2ae1a9c950c59" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:21.948652 systemd[1]: Started cri-containerd-3e8bd1acc20205b4311ae3a6282823727b8a6edb043e32e09137dfa3e5780a73.scope - libcontainer container 3e8bd1acc20205b4311ae3a6282823727b8a6edb043e32e09137dfa3e5780a73. May 13 23:47:21.950091 systemd[1]: Started cri-containerd-8e8c5c0c59a001b5ffe502dd9aaaa26896b19f2252b67f8bdcf93f6812481f3b.scope - libcontainer container 8e8c5c0c59a001b5ffe502dd9aaaa26896b19f2252b67f8bdcf93f6812481f3b. May 13 23:47:21.954173 systemd[1]: Started cri-containerd-12bdc41044e5bb14e37db27d0f4403092fd8d4c0c7b5459f0df97c71829177c2.scope - libcontainer container 12bdc41044e5bb14e37db27d0f4403092fd8d4c0c7b5459f0df97c71829177c2. May 13 23:47:22.003437 containerd[1465]: time="2025-05-13T23:47:22.003380628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a59a3965872e24de5a69b5ba792f14b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e8bd1acc20205b4311ae3a6282823727b8a6edb043e32e09137dfa3e5780a73\"" May 13 23:47:22.007356 containerd[1465]: time="2025-05-13T23:47:22.007308313Z" level=info msg="CreateContainer within sandbox \"3e8bd1acc20205b4311ae3a6282823727b8a6edb043e32e09137dfa3e5780a73\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:47:22.017280 containerd[1465]: time="2025-05-13T23:47:22.016742993Z" level=info msg="Container dd81f0584095961a17d3f40ed616c9ec29a1b8f692a3a6ed29446893b3668d81: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:22.018681 containerd[1465]: time="2025-05-13T23:47:22.018636902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"12bdc41044e5bb14e37db27d0f4403092fd8d4c0c7b5459f0df97c71829177c2\"" May 13 23:47:22.021688 containerd[1465]: time="2025-05-13T23:47:22.021653577Z" level=info msg="CreateContainer within sandbox \"12bdc41044e5bb14e37db27d0f4403092fd8d4c0c7b5459f0df97c71829177c2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:47:22.030782 containerd[1465]: time="2025-05-13T23:47:22.030713589Z" level=info msg="CreateContainer within sandbox \"3e8bd1acc20205b4311ae3a6282823727b8a6edb043e32e09137dfa3e5780a73\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd81f0584095961a17d3f40ed616c9ec29a1b8f692a3a6ed29446893b3668d81\"" May 13 23:47:22.031346 containerd[1465]: time="2025-05-13T23:47:22.031309080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e8c5c0c59a001b5ffe502dd9aaaa26896b19f2252b67f8bdcf93f6812481f3b\"" May 13 23:47:22.035511 containerd[1465]: time="2025-05-13T23:47:22.035474761Z" level=info msg="CreateContainer within sandbox \"8e8c5c0c59a001b5ffe502dd9aaaa26896b19f2252b67f8bdcf93f6812481f3b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:47:22.036871 containerd[1465]: time="2025-05-13T23:47:22.036844071Z" level=info msg="StartContainer for \"dd81f0584095961a17d3f40ed616c9ec29a1b8f692a3a6ed29446893b3668d81\"" May 13 23:47:22.038449 containerd[1465]: time="2025-05-13T23:47:22.038414794Z" level=info msg="connecting to shim dd81f0584095961a17d3f40ed616c9ec29a1b8f692a3a6ed29446893b3668d81" address="unix:///run/containerd/s/38dc09238658f49c432cea0021a4a0d1a902e7e13ab77b69b44c7e3af6c21197" protocol=ttrpc version=3 May 13 23:47:22.038817 kubelet[2219]: I0513 23:47:22.038790 2219 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:47:22.039260 kubelet[2219]: E0513 23:47:22.039232 2219 kubelet_node_status.go:108] 
"Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" May 13 23:47:22.046668 containerd[1465]: time="2025-05-13T23:47:22.046236535Z" level=info msg="Container 95e4ec5774b5ef312fdf92e3fcdb937032cee62d0558e90a5b3f1d29b93a0b80: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:22.046838 kubelet[2219]: W0513 23:47:22.046789 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 13 23:47:22.046926 kubelet[2219]: E0513 23:47:22.046857 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:22.048267 containerd[1465]: time="2025-05-13T23:47:22.048231051Z" level=info msg="Container 6d575622b5610203f7d5ae9bd7c7a6d439aec5d6970dcf59a22c1033b1b12aab: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:22.054268 containerd[1465]: time="2025-05-13T23:47:22.054214278Z" level=info msg="CreateContainer within sandbox \"12bdc41044e5bb14e37db27d0f4403092fd8d4c0c7b5459f0df97c71829177c2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"95e4ec5774b5ef312fdf92e3fcdb937032cee62d0558e90a5b3f1d29b93a0b80\"" May 13 23:47:22.054879 containerd[1465]: time="2025-05-13T23:47:22.054851226Z" level=info msg="StartContainer for \"95e4ec5774b5ef312fdf92e3fcdb937032cee62d0558e90a5b3f1d29b93a0b80\"" May 13 23:47:22.055999 containerd[1465]: time="2025-05-13T23:47:22.055964766Z" level=info msg="connecting to shim 95e4ec5774b5ef312fdf92e3fcdb937032cee62d0558e90a5b3f1d29b93a0b80" 
address="unix:///run/containerd/s/ca1ad8fe242067fd751ceafd97121d2c2fce1007546bb911d6d2ae1a9c950c59" protocol=ttrpc version=3 May 13 23:47:22.058523 containerd[1465]: time="2025-05-13T23:47:22.058480127Z" level=info msg="CreateContainer within sandbox \"8e8c5c0c59a001b5ffe502dd9aaaa26896b19f2252b67f8bdcf93f6812481f3b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6d575622b5610203f7d5ae9bd7c7a6d439aec5d6970dcf59a22c1033b1b12aab\"" May 13 23:47:22.059050 containerd[1465]: time="2025-05-13T23:47:22.059022180Z" level=info msg="StartContainer for \"6d575622b5610203f7d5ae9bd7c7a6d439aec5d6970dcf59a22c1033b1b12aab\"" May 13 23:47:22.059612 systemd[1]: Started cri-containerd-dd81f0584095961a17d3f40ed616c9ec29a1b8f692a3a6ed29446893b3668d81.scope - libcontainer container dd81f0584095961a17d3f40ed616c9ec29a1b8f692a3a6ed29446893b3668d81. May 13 23:47:22.061535 containerd[1465]: time="2025-05-13T23:47:22.061438212Z" level=info msg="connecting to shim 6d575622b5610203f7d5ae9bd7c7a6d439aec5d6970dcf59a22c1033b1b12aab" address="unix:///run/containerd/s/6d2876a98707b39038b46ef3671c2419a7bc455a4ab0777bc0af35ac65a262b9" protocol=ttrpc version=3 May 13 23:47:22.083938 systemd[1]: Started cri-containerd-95e4ec5774b5ef312fdf92e3fcdb937032cee62d0558e90a5b3f1d29b93a0b80.scope - libcontainer container 95e4ec5774b5ef312fdf92e3fcdb937032cee62d0558e90a5b3f1d29b93a0b80. May 13 23:47:22.090345 systemd[1]: Started cri-containerd-6d575622b5610203f7d5ae9bd7c7a6d439aec5d6970dcf59a22c1033b1b12aab.scope - libcontainer container 6d575622b5610203f7d5ae9bd7c7a6d439aec5d6970dcf59a22c1033b1b12aab. 
May 13 23:47:22.127407 containerd[1465]: time="2025-05-13T23:47:22.127335869Z" level=info msg="StartContainer for \"dd81f0584095961a17d3f40ed616c9ec29a1b8f692a3a6ed29446893b3668d81\" returns successfully" May 13 23:47:22.165177 containerd[1465]: time="2025-05-13T23:47:22.165123032Z" level=info msg="StartContainer for \"95e4ec5774b5ef312fdf92e3fcdb937032cee62d0558e90a5b3f1d29b93a0b80\" returns successfully" May 13 23:47:22.192432 containerd[1465]: time="2025-05-13T23:47:22.192359340Z" level=info msg="StartContainer for \"6d575622b5610203f7d5ae9bd7c7a6d439aec5d6970dcf59a22c1033b1b12aab\" returns successfully" May 13 23:47:22.251364 kubelet[2219]: E0513 23:47:22.251328 2219 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:22.265948 kubelet[2219]: E0513 23:47:22.265917 2219 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:22.271203 kubelet[2219]: E0513 23:47:22.271154 2219 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:22.377813 kubelet[2219]: W0513 23:47:22.377600 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 13 23:47:22.377813 kubelet[2219]: E0513 23:47:22.377694 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:22.541357 kubelet[2219]: W0513 
23:47:22.541260 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 13 23:47:22.541357 kubelet[2219]: E0513 23:47:22.541328 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:22.615777 kubelet[2219]: E0513 23:47:22.615708 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="1.6s" May 13 23:47:22.746565 kubelet[2219]: W0513 23:47:22.746377 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 13 23:47:22.746565 kubelet[2219]: E0513 23:47:22.746465 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 13 23:47:22.841076 kubelet[2219]: I0513 23:47:22.840611 2219 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:47:23.271991 kubelet[2219]: E0513 23:47:23.271956 2219 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:23.272992 kubelet[2219]: E0513 23:47:23.272970 2219 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:24.102148 kubelet[2219]: I0513 23:47:24.102093 2219 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 23:47:24.102148 kubelet[2219]: E0513 23:47:24.102129 2219 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 23:47:24.106854 kubelet[2219]: E0513 23:47:24.106811 2219 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:24.207033 kubelet[2219]: E0513 23:47:24.206990 2219 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:24.307349 kubelet[2219]: E0513 23:47:24.307313 2219 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:24.328860 kubelet[2219]: E0513 23:47:24.328796 2219 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:47:24.408676 kubelet[2219]: E0513 23:47:24.408245 2219 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:24.508520 kubelet[2219]: E0513 23:47:24.508471 2219 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:24.609037 kubelet[2219]: E0513 23:47:24.608998 2219 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:24.710162 kubelet[2219]: E0513 23:47:24.709992 2219 kubelet_node_status.go:467] "Error getting the current node 
from lister" err="node \"localhost\" not found" May 13 23:47:24.810522 kubelet[2219]: E0513 23:47:24.810469 2219 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:24.902362 kubelet[2219]: I0513 23:47:24.902315 2219 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:47:24.914836 kubelet[2219]: E0513 23:47:24.914801 2219 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 23:47:24.914836 kubelet[2219]: I0513 23:47:24.914833 2219 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 23:47:24.917198 kubelet[2219]: E0513 23:47:24.916989 2219 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 23:47:24.917198 kubelet[2219]: I0513 23:47:24.917012 2219 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:47:24.918734 kubelet[2219]: E0513 23:47:24.918684 2219 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 23:47:25.194136 kubelet[2219]: I0513 23:47:25.194023 2219 apiserver.go:52] "Watching apiserver" May 13 23:47:25.201229 kubelet[2219]: I0513 23:47:25.201189 2219 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:47:25.909706 kubelet[2219]: I0513 23:47:25.909467 2219 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 
23:47:26.184007 kubelet[2219]: I0513 23:47:26.183876 2219 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:47:26.596970 systemd[1]: Reload requested from client PID 2497 ('systemctl') (unit session-7.scope)... May 13 23:47:26.596988 systemd[1]: Reloading... May 13 23:47:26.667458 zram_generator::config[2541]: No configuration found. May 13 23:47:26.763003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:47:26.848916 systemd[1]: Reloading finished in 251 ms. May 13 23:47:26.873033 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:26.888570 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:47:26.888829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:26.888891 systemd[1]: kubelet.service: Consumed 1.521s CPU time, 125.2M memory peak. May 13 23:47:26.891271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:27.033466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:27.046766 (kubelet)[2583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:47:27.089796 kubelet[2583]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:47:27.089796 kubelet[2583]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 13 23:47:27.089796 kubelet[2583]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:47:27.090143 kubelet[2583]: I0513 23:47:27.089841 2583 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:47:27.095644 kubelet[2583]: I0513 23:47:27.095599 2583 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:47:27.095644 kubelet[2583]: I0513 23:47:27.095626 2583 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:47:27.095937 kubelet[2583]: I0513 23:47:27.095908 2583 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:47:27.097212 kubelet[2583]: I0513 23:47:27.097186 2583 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:47:27.100326 kubelet[2583]: I0513 23:47:27.100207 2583 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:47:27.104770 kubelet[2583]: I0513 23:47:27.104730 2583 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:47:27.108328 kubelet[2583]: I0513 23:47:27.107517 2583 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:47:27.108328 kubelet[2583]: I0513 23:47:27.107755 2583 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:47:27.108328 kubelet[2583]: I0513 23:47:27.107784 2583 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:47:27.108328 kubelet[2583]: I0513 23:47:27.107962 2583 topology_manager.go:138] "Creating topology manager with none policy" 
May 13 23:47:27.108552 kubelet[2583]: I0513 23:47:27.107970 2583 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:47:27.108552 kubelet[2583]: I0513 23:47:27.108029 2583 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:27.108552 kubelet[2583]: I0513 23:47:27.108169 2583 kubelet.go:446] "Attempting to sync node with API server" May 13 23:47:27.108552 kubelet[2583]: I0513 23:47:27.108182 2583 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:47:27.108552 kubelet[2583]: I0513 23:47:27.108202 2583 kubelet.go:352] "Adding apiserver pod source" May 13 23:47:27.108552 kubelet[2583]: I0513 23:47:27.108215 2583 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:47:27.109352 kubelet[2583]: I0513 23:47:27.109330 2583 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:47:27.109879 kubelet[2583]: I0513 23:47:27.109829 2583 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:47:27.110328 kubelet[2583]: I0513 23:47:27.110234 2583 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:47:27.110328 kubelet[2583]: I0513 23:47:27.110269 2583 server.go:1287] "Started kubelet" May 13 23:47:27.111085 kubelet[2583]: I0513 23:47:27.110828 2583 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:47:27.115465 kubelet[2583]: I0513 23:47:27.112353 2583 server.go:490] "Adding debug handlers to kubelet server" May 13 23:47:27.117526 kubelet[2583]: I0513 23:47:27.110842 2583 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:47:27.118767 kubelet[2583]: I0513 23:47:27.117733 2583 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:47:27.118767 kubelet[2583]: I0513 23:47:27.114139 2583 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:47:27.118767 kubelet[2583]: I0513 23:47:27.114021 2583 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:47:27.123023 kubelet[2583]: E0513 23:47:27.122988 2583 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:47:27.126767 kubelet[2583]: I0513 23:47:27.126740 2583 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:47:27.127636 kubelet[2583]: I0513 23:47:27.126985 2583 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:47:27.127636 kubelet[2583]: I0513 23:47:27.127236 2583 reconciler.go:26] "Reconciler: start to sync state" May 13 23:47:27.128692 kubelet[2583]: E0513 23:47:27.128659 2583 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:47:27.130751 kubelet[2583]: I0513 23:47:27.130720 2583 factory.go:221] Registration of the systemd container factory successfully May 13 23:47:27.130833 kubelet[2583]: I0513 23:47:27.130814 2583 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:47:27.133015 kubelet[2583]: I0513 23:47:27.132829 2583 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:47:27.135423 kubelet[2583]: I0513 23:47:27.135345 2583 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:47:27.135423 kubelet[2583]: I0513 23:47:27.135374 2583 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:47:27.135595 kubelet[2583]: I0513 23:47:27.135452 2583 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 23:47:27.135595 kubelet[2583]: I0513 23:47:27.135461 2583 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:47:27.135595 kubelet[2583]: E0513 23:47:27.135512 2583 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:47:27.137061 kubelet[2583]: I0513 23:47:27.136693 2583 factory.go:221] Registration of the containerd container factory successfully May 13 23:47:27.167699 kubelet[2583]: I0513 23:47:27.167673 2583 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:47:27.167699 kubelet[2583]: I0513 23:47:27.167691 2583 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:47:27.167849 kubelet[2583]: I0513 23:47:27.167711 2583 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:27.167874 kubelet[2583]: I0513 23:47:27.167861 2583 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:47:27.167896 kubelet[2583]: I0513 23:47:27.167871 2583 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:47:27.167896 kubelet[2583]: I0513 23:47:27.167888 2583 policy_none.go:49] "None policy: Start" May 13 23:47:27.167896 kubelet[2583]: I0513 23:47:27.167896 2583 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:47:27.167954 kubelet[2583]: I0513 23:47:27.167905 2583 state_mem.go:35] "Initializing new in-memory state store" May 13 23:47:27.168007 kubelet[2583]: I0513 23:47:27.167992 2583 state_mem.go:75] "Updated machine memory state" May 13 23:47:27.171614 kubelet[2583]: I0513 23:47:27.171589 2583 manager.go:519] "Failed to 
read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:47:27.171761 kubelet[2583]: I0513 23:47:27.171746 2583 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:47:27.171803 kubelet[2583]: I0513 23:47:27.171761 2583 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:47:27.172526 kubelet[2583]: I0513 23:47:27.172464 2583 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:47:27.173044 kubelet[2583]: E0513 23:47:27.172904 2583 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 23:47:27.237040 kubelet[2583]: I0513 23:47:27.236977 2583 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:47:27.237324 kubelet[2583]: I0513 23:47:27.237238 2583 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 23:47:27.237589 kubelet[2583]: I0513 23:47:27.237558 2583 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:47:27.252463 kubelet[2583]: E0513 23:47:27.252343 2583 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:47:27.253097 kubelet[2583]: E0513 23:47:27.253014 2583 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 23:47:27.273458 kubelet[2583]: I0513 23:47:27.273422 2583 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:47:27.303952 kubelet[2583]: I0513 23:47:27.303787 2583 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 23:47:27.303952 
kubelet[2583]: I0513 23:47:27.303880 2583 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 23:47:27.428443 kubelet[2583]: I0513 23:47:27.428228 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 23:47:27.428443 kubelet[2583]: I0513 23:47:27.428262 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a59a3965872e24de5a69b5ba792f14b7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a59a3965872e24de5a69b5ba792f14b7\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:27.428443 kubelet[2583]: I0513 23:47:27.428283 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a59a3965872e24de5a69b5ba792f14b7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a59a3965872e24de5a69b5ba792f14b7\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:27.428443 kubelet[2583]: I0513 23:47:27.428302 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:27.428443 kubelet[2583]: I0513 23:47:27.428321 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:27.428645 kubelet[2583]: I0513 23:47:27.428337 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a59a3965872e24de5a69b5ba792f14b7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a59a3965872e24de5a69b5ba792f14b7\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:27.428645 kubelet[2583]: I0513 23:47:27.428361 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:27.428645 kubelet[2583]: I0513 23:47:27.428382 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:27.428645 kubelet[2583]: I0513 23:47:27.428409 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:28.111593 kubelet[2583]: I0513 23:47:28.109662 2583 apiserver.go:52] "Watching apiserver" May 13 23:47:28.127205 kubelet[2583]: I0513 23:47:28.127155 2583 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 
23:47:28.150818 kubelet[2583]: I0513 23:47:28.150547 2583 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:47:28.150818 kubelet[2583]: I0513 23:47:28.150660 2583 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:47:28.156882 kubelet[2583]: E0513 23:47:28.156761 2583 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 23:47:28.158937 kubelet[2583]: E0513 23:47:28.158897 2583 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:47:28.187837 kubelet[2583]: I0513 23:47:28.187760 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.187741823 podStartE2EDuration="1.187741823s" podCreationTimestamp="2025-05-13 23:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:28.173345365 +0000 UTC m=+1.123095556" watchObservedRunningTime="2025-05-13 23:47:28.187741823 +0000 UTC m=+1.137492054" May 13 23:47:28.197919 kubelet[2583]: I0513 23:47:28.197868 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.197848075 podStartE2EDuration="3.197848075s" podCreationTimestamp="2025-05-13 23:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:28.187887084 +0000 UTC m=+1.137637315" watchObservedRunningTime="2025-05-13 23:47:28.197848075 +0000 UTC m=+1.147598306" May 13 23:47:28.215040 kubelet[2583]: I0513 23:47:28.214967 2583 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.2149316629999998 podStartE2EDuration="2.214931663s" podCreationTimestamp="2025-05-13 23:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:28.199084195 +0000 UTC m=+1.148834426" watchObservedRunningTime="2025-05-13 23:47:28.214931663 +0000 UTC m=+1.164681894" May 13 23:47:31.857729 sudo[1657]: pam_unix(sudo:session): session closed for user root May 13 23:47:31.859513 sshd[1656]: Connection closed by 10.0.0.1 port 60638 May 13 23:47:31.860138 sshd-session[1653]: pam_unix(sshd:session): session closed for user core May 13 23:47:31.872413 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:60638.service: Deactivated successfully. May 13 23:47:31.874725 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:47:31.875077 systemd[1]: session-7.scope: Consumed 7.643s CPU time, 231.7M memory peak. May 13 23:47:31.876372 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. May 13 23:47:31.877290 systemd-logind[1443]: Removed session 7. May 13 23:47:33.296198 kubelet[2583]: I0513 23:47:33.294408 2583 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:47:33.296589 containerd[1465]: time="2025-05-13T23:47:33.294735410Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:47:33.297408 kubelet[2583]: I0513 23:47:33.296890 2583 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:47:34.201343 systemd[1]: Created slice kubepods-besteffort-pod58c70ef7_bd05_4ff9_9729_9cfce5d2ef4b.slice - libcontainer container kubepods-besteffort-pod58c70ef7_bd05_4ff9_9729_9cfce5d2ef4b.slice. 
May 13 23:47:34.282425 kubelet[2583]: I0513 23:47:34.281726 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58c70ef7-bd05-4ff9-9729-9cfce5d2ef4b-kube-proxy\") pod \"kube-proxy-d4q49\" (UID: \"58c70ef7-bd05-4ff9-9729-9cfce5d2ef4b\") " pod="kube-system/kube-proxy-d4q49" May 13 23:47:34.282425 kubelet[2583]: I0513 23:47:34.281773 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58c70ef7-bd05-4ff9-9729-9cfce5d2ef4b-xtables-lock\") pod \"kube-proxy-d4q49\" (UID: \"58c70ef7-bd05-4ff9-9729-9cfce5d2ef4b\") " pod="kube-system/kube-proxy-d4q49" May 13 23:47:34.282425 kubelet[2583]: I0513 23:47:34.281795 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkw9m\" (UniqueName: \"kubernetes.io/projected/58c70ef7-bd05-4ff9-9729-9cfce5d2ef4b-kube-api-access-qkw9m\") pod \"kube-proxy-d4q49\" (UID: \"58c70ef7-bd05-4ff9-9729-9cfce5d2ef4b\") " pod="kube-system/kube-proxy-d4q49" May 13 23:47:34.282425 kubelet[2583]: I0513 23:47:34.281815 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58c70ef7-bd05-4ff9-9729-9cfce5d2ef4b-lib-modules\") pod \"kube-proxy-d4q49\" (UID: \"58c70ef7-bd05-4ff9-9729-9cfce5d2ef4b\") " pod="kube-system/kube-proxy-d4q49" May 13 23:47:34.401471 systemd[1]: Created slice kubepods-besteffort-pod1d1f1174_7e66_4ab5_b517_dcfa03505816.slice - libcontainer container kubepods-besteffort-pod1d1f1174_7e66_4ab5_b517_dcfa03505816.slice. 
May 13 23:47:34.483930 kubelet[2583]: I0513 23:47:34.483799 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d1f1174-7e66-4ab5-b517-dcfa03505816-var-lib-calico\") pod \"tigera-operator-789496d6f5-cc5nt\" (UID: \"1d1f1174-7e66-4ab5-b517-dcfa03505816\") " pod="tigera-operator/tigera-operator-789496d6f5-cc5nt" May 13 23:47:34.483930 kubelet[2583]: I0513 23:47:34.483839 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxm6\" (UniqueName: \"kubernetes.io/projected/1d1f1174-7e66-4ab5-b517-dcfa03505816-kube-api-access-7dxm6\") pod \"tigera-operator-789496d6f5-cc5nt\" (UID: \"1d1f1174-7e66-4ab5-b517-dcfa03505816\") " pod="tigera-operator/tigera-operator-789496d6f5-cc5nt" May 13 23:47:34.513992 containerd[1465]: time="2025-05-13T23:47:34.513709243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4q49,Uid:58c70ef7-bd05-4ff9-9729-9cfce5d2ef4b,Namespace:kube-system,Attempt:0,}" May 13 23:47:34.530651 containerd[1465]: time="2025-05-13T23:47:34.530271984Z" level=info msg="connecting to shim 4493f2e8170329cd0873cf73a63799517160e7662944d94f9775a38ea0c4fabc" address="unix:///run/containerd/s/f5055fef3a0e409d717cd33a50c3d4388002918900916f28282429ffa528a0ac" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:34.560119 systemd[1]: Started cri-containerd-4493f2e8170329cd0873cf73a63799517160e7662944d94f9775a38ea0c4fabc.scope - libcontainer container 4493f2e8170329cd0873cf73a63799517160e7662944d94f9775a38ea0c4fabc. 
May 13 23:47:34.590541 containerd[1465]: time="2025-05-13T23:47:34.590499695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4q49,Uid:58c70ef7-bd05-4ff9-9729-9cfce5d2ef4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4493f2e8170329cd0873cf73a63799517160e7662944d94f9775a38ea0c4fabc\"" May 13 23:47:34.600747 containerd[1465]: time="2025-05-13T23:47:34.599911806Z" level=info msg="CreateContainer within sandbox \"4493f2e8170329cd0873cf73a63799517160e7662944d94f9775a38ea0c4fabc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:47:34.621957 containerd[1465]: time="2025-05-13T23:47:34.621914142Z" level=info msg="Container bbc7b9ce1f2105618e05e9b6e6a5d3114058dbb75b328e11d063db8ad60ef0b6: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:34.635389 containerd[1465]: time="2025-05-13T23:47:34.635328776Z" level=info msg="CreateContainer within sandbox \"4493f2e8170329cd0873cf73a63799517160e7662944d94f9775a38ea0c4fabc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bbc7b9ce1f2105618e05e9b6e6a5d3114058dbb75b328e11d063db8ad60ef0b6\"" May 13 23:47:34.637446 containerd[1465]: time="2025-05-13T23:47:34.637409922Z" level=info msg="StartContainer for \"bbc7b9ce1f2105618e05e9b6e6a5d3114058dbb75b328e11d063db8ad60ef0b6\"" May 13 23:47:34.639321 containerd[1465]: time="2025-05-13T23:47:34.639285446Z" level=info msg="connecting to shim bbc7b9ce1f2105618e05e9b6e6a5d3114058dbb75b328e11d063db8ad60ef0b6" address="unix:///run/containerd/s/f5055fef3a0e409d717cd33a50c3d4388002918900916f28282429ffa528a0ac" protocol=ttrpc version=3 May 13 23:47:34.658689 systemd[1]: Started cri-containerd-bbc7b9ce1f2105618e05e9b6e6a5d3114058dbb75b328e11d063db8ad60ef0b6.scope - libcontainer container bbc7b9ce1f2105618e05e9b6e6a5d3114058dbb75b328e11d063db8ad60ef0b6. 
May 13 23:47:34.703454 containerd[1465]: time="2025-05-13T23:47:34.703394285Z" level=info msg="StartContainer for \"bbc7b9ce1f2105618e05e9b6e6a5d3114058dbb75b328e11d063db8ad60ef0b6\" returns successfully" May 13 23:47:34.707200 containerd[1465]: time="2025-05-13T23:47:34.707133609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-cc5nt,Uid:1d1f1174-7e66-4ab5-b517-dcfa03505816,Namespace:tigera-operator,Attempt:0,}" May 13 23:47:34.721892 containerd[1465]: time="2025-05-13T23:47:34.721849714Z" level=info msg="connecting to shim 2f8b47179b76ca18ffa4eab3d9c17f77a5538d3b04b8fcba8e12faae9b8f1138" address="unix:///run/containerd/s/2606e51fba48bb640b56d65dca40dfe5ffe4cfc10ffd6629358976fcfc64a2ec" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:34.743570 systemd[1]: Started cri-containerd-2f8b47179b76ca18ffa4eab3d9c17f77a5538d3b04b8fcba8e12faae9b8f1138.scope - libcontainer container 2f8b47179b76ca18ffa4eab3d9c17f77a5538d3b04b8fcba8e12faae9b8f1138. May 13 23:47:34.787954 containerd[1465]: time="2025-05-13T23:47:34.787909220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-cc5nt,Uid:1d1f1174-7e66-4ab5-b517-dcfa03505816,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2f8b47179b76ca18ffa4eab3d9c17f77a5538d3b04b8fcba8e12faae9b8f1138\"" May 13 23:47:34.792199 containerd[1465]: time="2025-05-13T23:47:34.790573181Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 23:47:35.182608 kubelet[2583]: I0513 23:47:35.182440 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d4q49" podStartSLOduration=1.182424484 podStartE2EDuration="1.182424484s" podCreationTimestamp="2025-05-13 23:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:35.182151646 +0000 UTC m=+8.131901877" watchObservedRunningTime="2025-05-13 
23:47:35.182424484 +0000 UTC m=+8.132174715" May 13 23:47:36.546537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028314648.mount: Deactivated successfully. May 13 23:47:36.832765 containerd[1465]: time="2025-05-13T23:47:36.832630616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:36.833500 containerd[1465]: time="2025-05-13T23:47:36.833436594Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 13 23:47:36.834114 containerd[1465]: time="2025-05-13T23:47:36.834089610Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:36.836151 containerd[1465]: time="2025-05-13T23:47:36.836113557Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:36.836824 containerd[1465]: time="2025-05-13T23:47:36.836792020Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.046176866s" May 13 23:47:36.836866 containerd[1465]: time="2025-05-13T23:47:36.836825669Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 13 23:47:36.842362 containerd[1465]: time="2025-05-13T23:47:36.842326194Z" level=info msg="CreateContainer within sandbox \"2f8b47179b76ca18ffa4eab3d9c17f77a5538d3b04b8fcba8e12faae9b8f1138\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 23:47:36.848583 containerd[1465]: time="2025-05-13T23:47:36.848529869Z" level=info msg="Container 2cf899f587e95fb4a012494e63c53eca0ac8eaf07415e66810d2316d725c1ec1: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:36.855928 containerd[1465]: time="2025-05-13T23:47:36.855879733Z" level=info msg="CreateContainer within sandbox \"2f8b47179b76ca18ffa4eab3d9c17f77a5538d3b04b8fcba8e12faae9b8f1138\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2cf899f587e95fb4a012494e63c53eca0ac8eaf07415e66810d2316d725c1ec1\"" May 13 23:47:36.857876 containerd[1465]: time="2025-05-13T23:47:36.857844143Z" level=info msg="StartContainer for \"2cf899f587e95fb4a012494e63c53eca0ac8eaf07415e66810d2316d725c1ec1\"" May 13 23:47:36.858866 containerd[1465]: time="2025-05-13T23:47:36.858836091Z" level=info msg="connecting to shim 2cf899f587e95fb4a012494e63c53eca0ac8eaf07415e66810d2316d725c1ec1" address="unix:///run/containerd/s/2606e51fba48bb640b56d65dca40dfe5ffe4cfc10ffd6629358976fcfc64a2ec" protocol=ttrpc version=3 May 13 23:47:36.906606 systemd[1]: Started cri-containerd-2cf899f587e95fb4a012494e63c53eca0ac8eaf07415e66810d2316d725c1ec1.scope - libcontainer container 2cf899f587e95fb4a012494e63c53eca0ac8eaf07415e66810d2316d725c1ec1. 
May 13 23:47:36.955942 containerd[1465]: time="2025-05-13T23:47:36.955895615Z" level=info msg="StartContainer for \"2cf899f587e95fb4a012494e63c53eca0ac8eaf07415e66810d2316d725c1ec1\" returns successfully" May 13 23:47:38.496931 kubelet[2583]: I0513 23:47:38.495999 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-cc5nt" podStartSLOduration=2.447593587 podStartE2EDuration="4.495980547s" podCreationTimestamp="2025-05-13 23:47:34 +0000 UTC" firstStartedPulling="2025-05-13 23:47:34.790043982 +0000 UTC m=+7.739794213" lastFinishedPulling="2025-05-13 23:47:36.838430942 +0000 UTC m=+9.788181173" observedRunningTime="2025-05-13 23:47:37.206901434 +0000 UTC m=+10.156651745" watchObservedRunningTime="2025-05-13 23:47:38.495980547 +0000 UTC m=+11.445730778" May 13 23:47:40.739988 systemd[1]: Created slice kubepods-besteffort-podd4aeb2af_7eb8_467d_96b2_ba31ba4be6f3.slice - libcontainer container kubepods-besteffort-podd4aeb2af_7eb8_467d_96b2_ba31ba4be6f3.slice. May 13 23:47:40.798326 systemd[1]: Created slice kubepods-besteffort-pod55c188f0_7830_40b0_baee_b6c8acbd03a5.slice - libcontainer container kubepods-besteffort-pod55c188f0_7830_40b0_baee_b6c8acbd03a5.slice. 
May 13 23:47:40.831149 kubelet[2583]: I0513 23:47:40.831101 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/55c188f0-7830-40b0-baee-b6c8acbd03a5-cni-log-dir\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.831149 kubelet[2583]: I0513 23:47:40.831146 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4aeb2af-7eb8-467d-96b2-ba31ba4be6f3-tigera-ca-bundle\") pod \"calico-typha-f9d66c777-rqmqs\" (UID: \"d4aeb2af-7eb8-467d-96b2-ba31ba4be6f3\") " pod="calico-system/calico-typha-f9d66c777-rqmqs" May 13 23:47:40.831581 kubelet[2583]: I0513 23:47:40.831168 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gsr\" (UniqueName: \"kubernetes.io/projected/55c188f0-7830-40b0-baee-b6c8acbd03a5-kube-api-access-d2gsr\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.831581 kubelet[2583]: I0513 23:47:40.831186 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55c188f0-7830-40b0-baee-b6c8acbd03a5-lib-modules\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.831581 kubelet[2583]: I0513 23:47:40.831209 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55c188f0-7830-40b0-baee-b6c8acbd03a5-xtables-lock\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.831581 kubelet[2583]: 
I0513 23:47:40.831226 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/55c188f0-7830-40b0-baee-b6c8acbd03a5-policysync\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.831581 kubelet[2583]: I0513 23:47:40.831244 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/55c188f0-7830-40b0-baee-b6c8acbd03a5-var-run-calico\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.831873 kubelet[2583]: I0513 23:47:40.831260 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55c188f0-7830-40b0-baee-b6c8acbd03a5-tigera-ca-bundle\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.831873 kubelet[2583]: I0513 23:47:40.831277 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/55c188f0-7830-40b0-baee-b6c8acbd03a5-node-certs\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.831873 kubelet[2583]: I0513 23:47:40.831292 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/55c188f0-7830-40b0-baee-b6c8acbd03a5-var-lib-calico\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.831873 kubelet[2583]: I0513 23:47:40.831307 2583 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/55c188f0-7830-40b0-baee-b6c8acbd03a5-flexvol-driver-host\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.831873 kubelet[2583]: I0513 23:47:40.831323 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d4aeb2af-7eb8-467d-96b2-ba31ba4be6f3-typha-certs\") pod \"calico-typha-f9d66c777-rqmqs\" (UID: \"d4aeb2af-7eb8-467d-96b2-ba31ba4be6f3\") " pod="calico-system/calico-typha-f9d66c777-rqmqs" May 13 23:47:40.832039 kubelet[2583]: I0513 23:47:40.831346 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5t85\" (UniqueName: \"kubernetes.io/projected/d4aeb2af-7eb8-467d-96b2-ba31ba4be6f3-kube-api-access-x5t85\") pod \"calico-typha-f9d66c777-rqmqs\" (UID: \"d4aeb2af-7eb8-467d-96b2-ba31ba4be6f3\") " pod="calico-system/calico-typha-f9d66c777-rqmqs" May 13 23:47:40.832039 kubelet[2583]: I0513 23:47:40.831378 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/55c188f0-7830-40b0-baee-b6c8acbd03a5-cni-bin-dir\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.832039 kubelet[2583]: I0513 23:47:40.831395 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/55c188f0-7830-40b0-baee-b6c8acbd03a5-cni-net-dir\") pod \"calico-node-hhl4m\" (UID: \"55c188f0-7830-40b0-baee-b6c8acbd03a5\") " pod="calico-system/calico-node-hhl4m" May 13 23:47:40.918233 kubelet[2583]: E0513 23:47:40.915896 2583 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btdr4" podUID="1c36bc46-2f0f-4988-88ed-db9b7f4f7206" May 13 23:47:40.939634 kubelet[2583]: E0513 23:47:40.939600 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:40.939634 kubelet[2583]: W0513 23:47:40.939621 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:40.939634 kubelet[2583]: E0513 23:47:40.939646 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:40.945161 kubelet[2583]: E0513 23:47:40.945117 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:40.945161 kubelet[2583]: W0513 23:47:40.945141 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:40.945161 kubelet[2583]: E0513 23:47:40.945160 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:40.960168 kubelet[2583]: E0513 23:47:40.960010 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:40.960168 kubelet[2583]: W0513 23:47:40.960037 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:40.960168 kubelet[2583]: E0513 23:47:40.960057 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:40.964328 kubelet[2583]: E0513 23:47:40.964246 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:40.964328 kubelet[2583]: W0513 23:47:40.964273 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:40.964328 kubelet[2583]: E0513 23:47:40.964292 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.013953 kubelet[2583]: E0513 23:47:41.013752 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.013953 kubelet[2583]: W0513 23:47:41.013774 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.013953 kubelet[2583]: E0513 23:47:41.013795 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.014849 kubelet[2583]: E0513 23:47:41.014754 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.018453 kubelet[2583]: W0513 23:47:41.014773 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.018549 kubelet[2583]: E0513 23:47:41.018461 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.018762 kubelet[2583]: E0513 23:47:41.018750 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.018794 kubelet[2583]: W0513 23:47:41.018762 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.018794 kubelet[2583]: E0513 23:47:41.018773 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.019013 kubelet[2583]: E0513 23:47:41.019001 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.019013 kubelet[2583]: W0513 23:47:41.019012 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.019088 kubelet[2583]: E0513 23:47:41.019022 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.019251 kubelet[2583]: E0513 23:47:41.019227 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.019251 kubelet[2583]: W0513 23:47:41.019247 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.019335 kubelet[2583]: E0513 23:47:41.019256 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.019433 kubelet[2583]: E0513 23:47:41.019418 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.019433 kubelet[2583]: W0513 23:47:41.019427 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.019497 kubelet[2583]: E0513 23:47:41.019435 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.019597 kubelet[2583]: E0513 23:47:41.019587 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.019597 kubelet[2583]: W0513 23:47:41.019597 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.019655 kubelet[2583]: E0513 23:47:41.019606 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.019782 kubelet[2583]: E0513 23:47:41.019771 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.019782 kubelet[2583]: W0513 23:47:41.019782 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.019846 kubelet[2583]: E0513 23:47:41.019794 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.019967 kubelet[2583]: E0513 23:47:41.019948 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.019967 kubelet[2583]: W0513 23:47:41.019963 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.020025 kubelet[2583]: E0513 23:47:41.019972 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.020207 kubelet[2583]: E0513 23:47:41.020195 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.020207 kubelet[2583]: W0513 23:47:41.020206 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.020286 kubelet[2583]: E0513 23:47:41.020214 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.020390 kubelet[2583]: E0513 23:47:41.020380 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.020390 kubelet[2583]: W0513 23:47:41.020390 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.020471 kubelet[2583]: E0513 23:47:41.020413 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.020556 kubelet[2583]: E0513 23:47:41.020547 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.020649 kubelet[2583]: W0513 23:47:41.020636 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.020683 kubelet[2583]: E0513 23:47:41.020653 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.020848 kubelet[2583]: E0513 23:47:41.020836 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.020848 kubelet[2583]: W0513 23:47:41.020847 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.020919 kubelet[2583]: E0513 23:47:41.020855 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.021000 kubelet[2583]: E0513 23:47:41.020990 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.021000 kubelet[2583]: W0513 23:47:41.021000 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.021056 kubelet[2583]: E0513 23:47:41.021007 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.021138 kubelet[2583]: E0513 23:47:41.021130 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.021138 kubelet[2583]: W0513 23:47:41.021138 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.021205 kubelet[2583]: E0513 23:47:41.021146 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.021279 kubelet[2583]: E0513 23:47:41.021269 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.021279 kubelet[2583]: W0513 23:47:41.021278 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.021333 kubelet[2583]: E0513 23:47:41.021287 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.021477 kubelet[2583]: E0513 23:47:41.021467 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.021477 kubelet[2583]: W0513 23:47:41.021477 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.021535 kubelet[2583]: E0513 23:47:41.021485 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.021623 kubelet[2583]: E0513 23:47:41.021614 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.021623 kubelet[2583]: W0513 23:47:41.021624 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.021680 kubelet[2583]: E0513 23:47:41.021631 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.021759 kubelet[2583]: E0513 23:47:41.021750 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.021759 kubelet[2583]: W0513 23:47:41.021759 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.021817 kubelet[2583]: E0513 23:47:41.021766 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.021890 kubelet[2583]: E0513 23:47:41.021881 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.021890 kubelet[2583]: W0513 23:47:41.021890 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.021940 kubelet[2583]: E0513 23:47:41.021897 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.033520 kubelet[2583]: E0513 23:47:41.033420 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.033520 kubelet[2583]: W0513 23:47:41.033442 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.033520 kubelet[2583]: E0513 23:47:41.033459 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.033520 kubelet[2583]: I0513 23:47:41.033488 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1c36bc46-2f0f-4988-88ed-db9b7f4f7206-varrun\") pod \"csi-node-driver-btdr4\" (UID: \"1c36bc46-2f0f-4988-88ed-db9b7f4f7206\") " pod="calico-system/csi-node-driver-btdr4" May 13 23:47:41.033704 kubelet[2583]: E0513 23:47:41.033688 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.033704 kubelet[2583]: W0513 23:47:41.033698 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.033753 kubelet[2583]: E0513 23:47:41.033707 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.033753 kubelet[2583]: I0513 23:47:41.033721 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1c36bc46-2f0f-4988-88ed-db9b7f4f7206-socket-dir\") pod \"csi-node-driver-btdr4\" (UID: \"1c36bc46-2f0f-4988-88ed-db9b7f4f7206\") " pod="calico-system/csi-node-driver-btdr4" May 13 23:47:41.033917 kubelet[2583]: E0513 23:47:41.033906 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.033917 kubelet[2583]: W0513 23:47:41.033916 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.033977 kubelet[2583]: E0513 23:47:41.033929 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.033977 kubelet[2583]: I0513 23:47:41.033943 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9l6t\" (UniqueName: \"kubernetes.io/projected/1c36bc46-2f0f-4988-88ed-db9b7f4f7206-kube-api-access-v9l6t\") pod \"csi-node-driver-btdr4\" (UID: \"1c36bc46-2f0f-4988-88ed-db9b7f4f7206\") " pod="calico-system/csi-node-driver-btdr4" May 13 23:47:41.034139 kubelet[2583]: E0513 23:47:41.034096 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.034139 kubelet[2583]: W0513 23:47:41.034104 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.034139 kubelet[2583]: E0513 23:47:41.034117 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.034139 kubelet[2583]: I0513 23:47:41.034134 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1c36bc46-2f0f-4988-88ed-db9b7f4f7206-registration-dir\") pod \"csi-node-driver-btdr4\" (UID: \"1c36bc46-2f0f-4988-88ed-db9b7f4f7206\") " pod="calico-system/csi-node-driver-btdr4" May 13 23:47:41.034316 kubelet[2583]: E0513 23:47:41.034301 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.034316 kubelet[2583]: W0513 23:47:41.034311 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.034381 kubelet[2583]: E0513 23:47:41.034323 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.034381 kubelet[2583]: I0513 23:47:41.034337 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c36bc46-2f0f-4988-88ed-db9b7f4f7206-kubelet-dir\") pod \"csi-node-driver-btdr4\" (UID: \"1c36bc46-2f0f-4988-88ed-db9b7f4f7206\") " pod="calico-system/csi-node-driver-btdr4" May 13 23:47:41.034561 kubelet[2583]: E0513 23:47:41.034527 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.034561 kubelet[2583]: W0513 23:47:41.034540 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.034561 kubelet[2583]: E0513 23:47:41.034551 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.034722 kubelet[2583]: E0513 23:47:41.034705 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.034722 kubelet[2583]: W0513 23:47:41.034714 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.034722 kubelet[2583]: E0513 23:47:41.034753 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.035801 kubelet[2583]: E0513 23:47:41.035790 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.035801 kubelet[2583]: W0513 23:47:41.035800 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.035876 kubelet[2583]: E0513 23:47:41.035820 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.035967 kubelet[2583]: E0513 23:47:41.035955 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.035967 kubelet[2583]: W0513 23:47:41.035965 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.036018 kubelet[2583]: E0513 23:47:41.035971 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.048196 containerd[1465]: time="2025-05-13T23:47:41.047931829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9d66c777-rqmqs,Uid:d4aeb2af-7eb8-467d-96b2-ba31ba4be6f3,Namespace:calico-system,Attempt:0,}" May 13 23:47:41.090141 update_engine[1444]: I20250513 23:47:41.089445 1444 update_attempter.cc:509] Updating boot flags... 
May 13 23:47:41.105423 containerd[1465]: time="2025-05-13T23:47:41.103689771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hhl4m,Uid:55c188f0-7830-40b0-baee-b6c8acbd03a5,Namespace:calico-system,Attempt:0,}" May 13 23:47:41.121427 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3024) May 13 23:47:41.124094 containerd[1465]: time="2025-05-13T23:47:41.124003124Z" level=info msg="connecting to shim 889917d976d0bd7b1daa13269d587244e5653457932d2a660f045c5e0b240550" address="unix:///run/containerd/s/159742ae98024730ca9d2d2f8eefb4895089675f7234e8865faed1978293f3da" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:41.137172 kubelet[2583]: E0513 23:47:41.137137 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.137172 kubelet[2583]: W0513 23:47:41.137172 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.137347 kubelet[2583]: E0513 23:47:41.137193 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.146189 kubelet[2583]: E0513 23:47:41.144527 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.146189 kubelet[2583]: W0513 23:47:41.144550 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.146189 kubelet[2583]: E0513 23:47:41.144582 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.146548 containerd[1465]: time="2025-05-13T23:47:41.145884165Z" level=info msg="connecting to shim ab94f608c7653f684b4c063de4722fb2770f61ad6716b30ee9175c23358e6209" address="unix:///run/containerd/s/81f599271a6808f65775009831c028936ef6c55219cda9e84cd6e71632bd24e9" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:41.148533 kubelet[2583]: E0513 23:47:41.148508 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.148533 kubelet[2583]: W0513 23:47:41.148530 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.148667 kubelet[2583]: E0513 23:47:41.148597 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:41.173125 kubelet[2583]: E0513 23:47:41.173083 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.173270 kubelet[2583]: W0513 23:47:41.173222 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.173496 kubelet[2583]: E0513 23:47:41.173449 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.173496 kubelet[2583]: E0513 23:47:41.173457 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.187411 kubelet[2583]: E0513 23:47:41.187362 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:41.187411 kubelet[2583]: W0513 23:47:41.187381 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:41.187411 kubelet[2583]: E0513 23:47:41.187452 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:41.192601 systemd[1]: Started cri-containerd-889917d976d0bd7b1daa13269d587244e5653457932d2a660f045c5e0b240550.scope - libcontainer container 889917d976d0bd7b1daa13269d587244e5653457932d2a660f045c5e0b240550. 
May 13 23:47:41.209430 systemd[1]: Started cri-containerd-ab94f608c7653f684b4c063de4722fb2770f61ad6716b30ee9175c23358e6209.scope - libcontainer container ab94f608c7653f684b4c063de4722fb2770f61ad6716b30ee9175c23358e6209. May 13 23:47:41.222668 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3023) May 13 23:47:41.295538 containerd[1465]: time="2025-05-13T23:47:41.295418851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9d66c777-rqmqs,Uid:d4aeb2af-7eb8-467d-96b2-ba31ba4be6f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"889917d976d0bd7b1daa13269d587244e5653457932d2a660f045c5e0b240550\"" May 13 23:47:41.301751 containerd[1465]: time="2025-05-13T23:47:41.300656223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 23:47:41.325585 containerd[1465]: time="2025-05-13T23:47:41.325524606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hhl4m,Uid:55c188f0-7830-40b0-baee-b6c8acbd03a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab94f608c7653f684b4c063de4722fb2770f61ad6716b30ee9175c23358e6209\"" May 13 23:47:42.630082 containerd[1465]: time="2025-05-13T23:47:42.630036013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:42.632060 containerd[1465]: time="2025-05-13T23:47:42.632004604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 23:47:42.633046 containerd[1465]: time="2025-05-13T23:47:42.632997441Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:42.635578 containerd[1465]: time="2025-05-13T23:47:42.635113660Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:42.635847 containerd[1465]: time="2025-05-13T23:47:42.635827042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.335129251s" May 13 23:47:42.635947 containerd[1465]: time="2025-05-13T23:47:42.635931583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 23:47:42.637419 containerd[1465]: time="2025-05-13T23:47:42.637381670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:47:42.646014 containerd[1465]: time="2025-05-13T23:47:42.645969653Z" level=info msg="CreateContainer within sandbox \"889917d976d0bd7b1daa13269d587244e5653457932d2a660f045c5e0b240550\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:47:42.652357 containerd[1465]: time="2025-05-13T23:47:42.652324594Z" level=info msg="Container b471375ea587692e49c2e4bf54ce20d5d5bd5e0f3c80c78a8bc5745ce834f278: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:42.661438 containerd[1465]: time="2025-05-13T23:47:42.661394512Z" level=info msg="CreateContainer within sandbox \"889917d976d0bd7b1daa13269d587244e5653457932d2a660f045c5e0b240550\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b471375ea587692e49c2e4bf54ce20d5d5bd5e0f3c80c78a8bc5745ce834f278\"" May 13 23:47:42.664062 containerd[1465]: time="2025-05-13T23:47:42.662748501Z" level=info msg="StartContainer for 
\"b471375ea587692e49c2e4bf54ce20d5d5bd5e0f3c80c78a8bc5745ce834f278\"" May 13 23:47:42.664062 containerd[1465]: time="2025-05-13T23:47:42.663766823Z" level=info msg="connecting to shim b471375ea587692e49c2e4bf54ce20d5d5bd5e0f3c80c78a8bc5745ce834f278" address="unix:///run/containerd/s/159742ae98024730ca9d2d2f8eefb4895089675f7234e8865faed1978293f3da" protocol=ttrpc version=3 May 13 23:47:42.687600 systemd[1]: Started cri-containerd-b471375ea587692e49c2e4bf54ce20d5d5bd5e0f3c80c78a8bc5745ce834f278.scope - libcontainer container b471375ea587692e49c2e4bf54ce20d5d5bd5e0f3c80c78a8bc5745ce834f278. May 13 23:47:42.734827 containerd[1465]: time="2025-05-13T23:47:42.734780946Z" level=info msg="StartContainer for \"b471375ea587692e49c2e4bf54ce20d5d5bd5e0f3c80c78a8bc5745ce834f278\" returns successfully" May 13 23:47:43.136429 kubelet[2583]: E0513 23:47:43.136376 2583 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btdr4" podUID="1c36bc46-2f0f-4988-88ed-db9b7f4f7206" May 13 23:47:43.206970 kubelet[2583]: E0513 23:47:43.206919 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:43.217516 kubelet[2583]: I0513 23:47:43.217455 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f9d66c777-rqmqs" podStartSLOduration=1.8810413449999999 podStartE2EDuration="3.217438491s" podCreationTimestamp="2025-05-13 23:47:40 +0000 UTC" firstStartedPulling="2025-05-13 23:47:41.300267902 +0000 UTC m=+14.250018133" lastFinishedPulling="2025-05-13 23:47:42.636665048 +0000 UTC m=+15.586415279" observedRunningTime="2025-05-13 23:47:43.217057819 +0000 UTC m=+16.166808050" watchObservedRunningTime="2025-05-13 
23:47:43.217438491 +0000 UTC m=+16.167188762" May 13 23:47:43.237532 kubelet[2583]: E0513 23:47:43.237492 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.237532 kubelet[2583]: W0513 23:47:43.237519 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.237532 kubelet[2583]: E0513 23:47:43.237540 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.237744 kubelet[2583]: E0513 23:47:43.237712 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.237744 kubelet[2583]: W0513 23:47:43.237720 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.237744 kubelet[2583]: E0513 23:47:43.237729 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.239264 kubelet[2583]: E0513 23:47:43.239253 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.239264 kubelet[2583]: W0513 23:47:43.239264 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.239316 kubelet[2583]: E0513 23:47:43.239272 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.239450 kubelet[2583]: E0513 23:47:43.239440 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.239450 kubelet[2583]: W0513 23:47:43.239449 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.239526 kubelet[2583]: E0513 23:47:43.239456 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.239611 kubelet[2583]: E0513 23:47:43.239599 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.239611 kubelet[2583]: W0513 23:47:43.239609 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.239683 kubelet[2583]: E0513 23:47:43.239616 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.239772 kubelet[2583]: E0513 23:47:43.239761 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.239772 kubelet[2583]: W0513 23:47:43.239770 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.239844 kubelet[2583]: E0513 23:47:43.239778 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.239959 kubelet[2583]: E0513 23:47:43.239948 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.239959 kubelet[2583]: W0513 23:47:43.239957 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.240029 kubelet[2583]: E0513 23:47:43.239964 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.240125 kubelet[2583]: E0513 23:47:43.240099 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.240125 kubelet[2583]: W0513 23:47:43.240108 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.240125 kubelet[2583]: E0513 23:47:43.240117 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.240278 kubelet[2583]: E0513 23:47:43.240266 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.240278 kubelet[2583]: W0513 23:47:43.240276 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.240330 kubelet[2583]: E0513 23:47:43.240284 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.284797 kubelet[2583]: E0513 23:47:43.284767 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.284797 kubelet[2583]: W0513 23:47:43.284789 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.285055 kubelet[2583]: E0513 23:47:43.284808 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.285305 kubelet[2583]: E0513 23:47:43.285103 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.285305 kubelet[2583]: W0513 23:47:43.285115 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.285305 kubelet[2583]: E0513 23:47:43.285130 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.285526 kubelet[2583]: E0513 23:47:43.285332 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.285526 kubelet[2583]: W0513 23:47:43.285341 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.285526 kubelet[2583]: E0513 23:47:43.285357 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.285696 kubelet[2583]: E0513 23:47:43.285593 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.285696 kubelet[2583]: W0513 23:47:43.285611 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.285696 kubelet[2583]: E0513 23:47:43.285629 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.285868 kubelet[2583]: E0513 23:47:43.285855 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.285868 kubelet[2583]: W0513 23:47:43.285867 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.285949 kubelet[2583]: E0513 23:47:43.285881 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.286068 kubelet[2583]: E0513 23:47:43.286056 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.286068 kubelet[2583]: W0513 23:47:43.286067 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.286261 kubelet[2583]: E0513 23:47:43.286082 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.286353 kubelet[2583]: E0513 23:47:43.286337 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.286445 kubelet[2583]: W0513 23:47:43.286431 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.286615 kubelet[2583]: E0513 23:47:43.286577 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.286823 kubelet[2583]: E0513 23:47:43.286800 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.286977 kubelet[2583]: W0513 23:47:43.286843 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.286977 kubelet[2583]: E0513 23:47:43.286861 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.287134 kubelet[2583]: E0513 23:47:43.287120 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.287207 kubelet[2583]: W0513 23:47:43.287194 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.287345 kubelet[2583]: E0513 23:47:43.287268 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.287647 kubelet[2583]: E0513 23:47:43.287573 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.287647 kubelet[2583]: W0513 23:47:43.287588 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.287647 kubelet[2583]: E0513 23:47:43.287603 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.287917 kubelet[2583]: E0513 23:47:43.287772 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.287917 kubelet[2583]: W0513 23:47:43.287781 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.287917 kubelet[2583]: E0513 23:47:43.287801 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.288010 kubelet[2583]: E0513 23:47:43.287969 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.288010 kubelet[2583]: W0513 23:47:43.287977 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.288010 kubelet[2583]: E0513 23:47:43.287989 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.288324 kubelet[2583]: E0513 23:47:43.288255 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.288324 kubelet[2583]: W0513 23:47:43.288287 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.288324 kubelet[2583]: E0513 23:47:43.288311 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.288538 kubelet[2583]: E0513 23:47:43.288508 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.288538 kubelet[2583]: W0513 23:47:43.288520 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.288538 kubelet[2583]: E0513 23:47:43.288534 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.288765 kubelet[2583]: E0513 23:47:43.288749 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.288765 kubelet[2583]: W0513 23:47:43.288763 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.288838 kubelet[2583]: E0513 23:47:43.288778 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.289042 kubelet[2583]: E0513 23:47:43.289026 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.289042 kubelet[2583]: W0513 23:47:43.289040 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.289112 kubelet[2583]: E0513 23:47:43.289055 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.289248 kubelet[2583]: E0513 23:47:43.289237 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.289248 kubelet[2583]: W0513 23:47:43.289247 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.289318 kubelet[2583]: E0513 23:47:43.289259 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:47:43.289318 kubelet[2583]: E0513 23:47:43.289529 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:47:43.289318 kubelet[2583]: W0513 23:47:43.289540 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:47:43.289318 kubelet[2583]: E0513 23:47:43.289551 2583 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:47:43.921478 containerd[1465]: time="2025-05-13T23:47:43.921424038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:43.922566 containerd[1465]: time="2025-05-13T23:47:43.922510763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 23:47:43.923627 containerd[1465]: time="2025-05-13T23:47:43.923591208Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:43.926802 containerd[1465]: time="2025-05-13T23:47:43.926725279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:43.927628 containerd[1465]: time="2025-05-13T23:47:43.927583201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.290157843s" May 13 23:47:43.927672 containerd[1465]: time="2025-05-13T23:47:43.927626530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 23:47:43.930921 containerd[1465]: time="2025-05-13T23:47:43.930849938Z" level=info msg="CreateContainer within sandbox \"ab94f608c7653f684b4c063de4722fb2770f61ad6716b30ee9175c23358e6209\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:47:43.951233 containerd[1465]: time="2025-05-13T23:47:43.950900765Z" level=info msg="Container ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:43.964571 containerd[1465]: time="2025-05-13T23:47:43.964513896Z" level=info msg="CreateContainer within sandbox \"ab94f608c7653f684b4c063de4722fb2770f61ad6716b30ee9175c23358e6209\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d\"" May 13 23:47:43.965192 containerd[1465]: time="2025-05-13T23:47:43.965155057Z" level=info msg="StartContainer for \"ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d\"" May 13 23:47:43.966713 containerd[1465]: time="2025-05-13T23:47:43.966668023Z" level=info msg="connecting to shim ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d" address="unix:///run/containerd/s/81f599271a6808f65775009831c028936ef6c55219cda9e84cd6e71632bd24e9" protocol=ttrpc version=3 May 13 23:47:43.990676 systemd[1]: Started cri-containerd-ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d.scope - libcontainer container 
ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d. May 13 23:47:44.028378 containerd[1465]: time="2025-05-13T23:47:44.028339110Z" level=info msg="StartContainer for \"ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d\" returns successfully" May 13 23:47:44.116975 systemd[1]: cri-containerd-ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d.scope: Deactivated successfully. May 13 23:47:44.145772 containerd[1465]: time="2025-05-13T23:47:44.144796829Z" level=info msg="received exit event container_id:\"ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d\" id:\"ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d\" pid:3246 exited_at:{seconds:1747180064 nanos:140944096}" May 13 23:47:44.145772 containerd[1465]: time="2025-05-13T23:47:44.144904328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d\" id:\"ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d\" pid:3246 exited_at:{seconds:1747180064 nanos:140944096}" May 13 23:47:44.192241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee4cd673bca1197e9ac15dc11587207846380c5a12f629a527a931a10980ea8d-rootfs.mount: Deactivated successfully. 
May 13 23:47:44.212704 kubelet[2583]: I0513 23:47:44.212672 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:47:44.214539 kubelet[2583]: E0513 23:47:44.213208 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:44.214539 kubelet[2583]: E0513 23:47:44.213429 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:45.136840 kubelet[2583]: E0513 23:47:45.136792 2583 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btdr4" podUID="1c36bc46-2f0f-4988-88ed-db9b7f4f7206" May 13 23:47:45.219494 kubelet[2583]: E0513 23:47:45.219281 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:45.221104 containerd[1465]: time="2025-05-13T23:47:45.220966705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:47:47.138422 kubelet[2583]: E0513 23:47:47.137442 2583 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btdr4" podUID="1c36bc46-2f0f-4988-88ed-db9b7f4f7206" May 13 23:47:49.136688 kubelet[2583]: E0513 23:47:49.136633 2583 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btdr4" podUID="1c36bc46-2f0f-4988-88ed-db9b7f4f7206" May 13 23:47:49.193007 containerd[1465]: time="2025-05-13T23:47:49.192911121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:49.193652 containerd[1465]: time="2025-05-13T23:47:49.193585137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 23:47:49.196535 containerd[1465]: time="2025-05-13T23:47:49.196504355Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:49.199570 containerd[1465]: time="2025-05-13T23:47:49.199505465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:49.200335 containerd[1465]: time="2025-05-13T23:47:49.200216007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.979200654s" May 13 23:47:49.200335 containerd[1465]: time="2025-05-13T23:47:49.200247211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 23:47:49.202658 containerd[1465]: time="2025-05-13T23:47:49.202614630Z" level=info msg="CreateContainer within sandbox \"ab94f608c7653f684b4c063de4722fb2770f61ad6716b30ee9175c23358e6209\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:47:49.213292 containerd[1465]: time="2025-05-13T23:47:49.210659623Z" level=info msg="Container cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:49.230671 containerd[1465]: time="2025-05-13T23:47:49.230480102Z" level=info msg="CreateContainer within sandbox \"ab94f608c7653f684b4c063de4722fb2770f61ad6716b30ee9175c23358e6209\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a\"" May 13 23:47:49.232656 containerd[1465]: time="2025-05-13T23:47:49.232597245Z" level=info msg="StartContainer for \"cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a\"" May 13 23:47:49.237937 containerd[1465]: time="2025-05-13T23:47:49.235385644Z" level=info msg="connecting to shim cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a" address="unix:///run/containerd/s/81f599271a6808f65775009831c028936ef6c55219cda9e84cd6e71632bd24e9" protocol=ttrpc version=3 May 13 23:47:49.262595 systemd[1]: Started cri-containerd-cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a.scope - libcontainer container cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a. May 13 23:47:49.319908 containerd[1465]: time="2025-05-13T23:47:49.319778533Z" level=info msg="StartContainer for \"cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a\" returns successfully" May 13 23:47:49.827390 systemd[1]: cri-containerd-cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a.scope: Deactivated successfully. May 13 23:47:49.828056 systemd[1]: cri-containerd-cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a.scope: Consumed 465ms CPU time, 159.1M memory peak, 4K read from disk, 150.3M written to disk. 
May 13 23:47:49.841826 containerd[1465]: time="2025-05-13T23:47:49.841782382Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a\" id:\"cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a\" pid:3305 exited_at:{seconds:1747180069 nanos:841462537}" May 13 23:47:49.841941 containerd[1465]: time="2025-05-13T23:47:49.841859113Z" level=info msg="received exit event container_id:\"cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a\" id:\"cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a\" pid:3305 exited_at:{seconds:1747180069 nanos:841462537}" May 13 23:47:49.861391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf1073085c8f0e6d9e251d3564dc432210fdc6a1c4d5d35e2548200bb360637a-rootfs.mount: Deactivated successfully. May 13 23:47:49.878855 kubelet[2583]: I0513 23:47:49.878828 2583 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 23:47:49.977357 systemd[1]: Created slice kubepods-besteffort-pod9050bd15_4914_4c5d_a7cd_ec2145176ddd.slice - libcontainer container kubepods-besteffort-pod9050bd15_4914_4c5d_a7cd_ec2145176ddd.slice. May 13 23:47:49.983107 systemd[1]: Created slice kubepods-besteffort-pod8c954ca0_00cd_4be0_b6f5_2d91446dce84.slice - libcontainer container kubepods-besteffort-pod8c954ca0_00cd_4be0_b6f5_2d91446dce84.slice. May 13 23:47:49.987818 systemd[1]: Created slice kubepods-burstable-pod9a8b11b8_36b3_4628_9f7d_95b71824c9d3.slice - libcontainer container kubepods-burstable-pod9a8b11b8_36b3_4628_9f7d_95b71824c9d3.slice. May 13 23:47:49.993339 systemd[1]: Created slice kubepods-burstable-pod3a3185ac_12bd_4cbd_938b_54dfdd3c7349.slice - libcontainer container kubepods-burstable-pod3a3185ac_12bd_4cbd_938b_54dfdd3c7349.slice. 
May 13 23:47:49.998768 systemd[1]: Created slice kubepods-besteffort-podf77d873a_4070_40a3_838a_0695cd06abf4.slice - libcontainer container kubepods-besteffort-podf77d873a_4070_40a3_838a_0695cd06abf4.slice. May 13 23:47:50.042605 kubelet[2583]: I0513 23:47:50.042560 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a8b11b8-36b3-4628-9f7d-95b71824c9d3-config-volume\") pod \"coredns-668d6bf9bc-zvckz\" (UID: \"9a8b11b8-36b3-4628-9f7d-95b71824c9d3\") " pod="kube-system/coredns-668d6bf9bc-zvckz" May 13 23:47:50.042605 kubelet[2583]: I0513 23:47:50.042605 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgfc6\" (UniqueName: \"kubernetes.io/projected/3a3185ac-12bd-4cbd-938b-54dfdd3c7349-kube-api-access-mgfc6\") pod \"coredns-668d6bf9bc-8cfz4\" (UID: \"3a3185ac-12bd-4cbd-938b-54dfdd3c7349\") " pod="kube-system/coredns-668d6bf9bc-8cfz4" May 13 23:47:50.042824 kubelet[2583]: I0513 23:47:50.042630 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn4qp\" (UniqueName: \"kubernetes.io/projected/9050bd15-4914-4c5d-a7cd-ec2145176ddd-kube-api-access-cn4qp\") pod \"calico-apiserver-fd655d978-r6w2s\" (UID: \"9050bd15-4914-4c5d-a7cd-ec2145176ddd\") " pod="calico-apiserver/calico-apiserver-fd655d978-r6w2s" May 13 23:47:50.042824 kubelet[2583]: I0513 23:47:50.042653 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c954ca0-00cd-4be0-b6f5-2d91446dce84-tigera-ca-bundle\") pod \"calico-kube-controllers-764479c758-hpht7\" (UID: \"8c954ca0-00cd-4be0-b6f5-2d91446dce84\") " pod="calico-system/calico-kube-controllers-764479c758-hpht7" May 13 23:47:50.042824 kubelet[2583]: I0513 23:47:50.042677 2583 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f77d873a-4070-40a3-838a-0695cd06abf4-calico-apiserver-certs\") pod \"calico-apiserver-fd655d978-bfhsg\" (UID: \"f77d873a-4070-40a3-838a-0695cd06abf4\") " pod="calico-apiserver/calico-apiserver-fd655d978-bfhsg" May 13 23:47:50.042824 kubelet[2583]: I0513 23:47:50.042698 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4svvw\" (UniqueName: \"kubernetes.io/projected/f77d873a-4070-40a3-838a-0695cd06abf4-kube-api-access-4svvw\") pod \"calico-apiserver-fd655d978-bfhsg\" (UID: \"f77d873a-4070-40a3-838a-0695cd06abf4\") " pod="calico-apiserver/calico-apiserver-fd655d978-bfhsg" May 13 23:47:50.042824 kubelet[2583]: I0513 23:47:50.042718 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9b8\" (UniqueName: \"kubernetes.io/projected/8c954ca0-00cd-4be0-b6f5-2d91446dce84-kube-api-access-8x9b8\") pod \"calico-kube-controllers-764479c758-hpht7\" (UID: \"8c954ca0-00cd-4be0-b6f5-2d91446dce84\") " pod="calico-system/calico-kube-controllers-764479c758-hpht7" May 13 23:47:50.042935 kubelet[2583]: I0513 23:47:50.042737 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq77r\" (UniqueName: \"kubernetes.io/projected/9a8b11b8-36b3-4628-9f7d-95b71824c9d3-kube-api-access-tq77r\") pod \"coredns-668d6bf9bc-zvckz\" (UID: \"9a8b11b8-36b3-4628-9f7d-95b71824c9d3\") " pod="kube-system/coredns-668d6bf9bc-zvckz" May 13 23:47:50.042935 kubelet[2583]: I0513 23:47:50.042754 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a3185ac-12bd-4cbd-938b-54dfdd3c7349-config-volume\") pod \"coredns-668d6bf9bc-8cfz4\" (UID: \"3a3185ac-12bd-4cbd-938b-54dfdd3c7349\") " 
pod="kube-system/coredns-668d6bf9bc-8cfz4" May 13 23:47:50.042935 kubelet[2583]: I0513 23:47:50.042772 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9050bd15-4914-4c5d-a7cd-ec2145176ddd-calico-apiserver-certs\") pod \"calico-apiserver-fd655d978-r6w2s\" (UID: \"9050bd15-4914-4c5d-a7cd-ec2145176ddd\") " pod="calico-apiserver/calico-apiserver-fd655d978-r6w2s" May 13 23:47:50.247061 kubelet[2583]: E0513 23:47:50.246900 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:50.248975 containerd[1465]: time="2025-05-13T23:47:50.248939764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:47:50.284334 containerd[1465]: time="2025-05-13T23:47:50.284287254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd655d978-r6w2s,Uid:9050bd15-4914-4c5d-a7cd-ec2145176ddd,Namespace:calico-apiserver,Attempt:0,}" May 13 23:47:50.287512 containerd[1465]: time="2025-05-13T23:47:50.287339913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764479c758-hpht7,Uid:8c954ca0-00cd-4be0-b6f5-2d91446dce84,Namespace:calico-system,Attempt:0,}" May 13 23:47:50.291635 kubelet[2583]: E0513 23:47:50.291596 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:50.292190 containerd[1465]: time="2025-05-13T23:47:50.292080523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvckz,Uid:9a8b11b8-36b3-4628-9f7d-95b71824c9d3,Namespace:kube-system,Attempt:0,}" May 13 23:47:50.296222 kubelet[2583]: E0513 23:47:50.296169 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:50.296562 containerd[1465]: time="2025-05-13T23:47:50.296525613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8cfz4,Uid:3a3185ac-12bd-4cbd-938b-54dfdd3c7349,Namespace:kube-system,Attempt:0,}" May 13 23:47:50.302118 containerd[1465]: time="2025-05-13T23:47:50.302047091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd655d978-bfhsg,Uid:f77d873a-4070-40a3-838a-0695cd06abf4,Namespace:calico-apiserver,Attempt:0,}" May 13 23:47:50.768643 containerd[1465]: time="2025-05-13T23:47:50.768499331Z" level=error msg="Failed to destroy network for sandbox \"00f90b49e36c22f05be6339fdf83eb172ca812c90f7725904159dcb42d073f97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.770426 containerd[1465]: time="2025-05-13T23:47:50.770332303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvckz,Uid:9a8b11b8-36b3-4628-9f7d-95b71824c9d3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00f90b49e36c22f05be6339fdf83eb172ca812c90f7725904159dcb42d073f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.776006 kubelet[2583]: E0513 23:47:50.775943 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00f90b49e36c22f05be6339fdf83eb172ca812c90f7725904159dcb42d073f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 
23:47:50.776168 containerd[1465]: time="2025-05-13T23:47:50.776112616Z" level=error msg="Failed to destroy network for sandbox \"c8290470bb272f96353db803afc56ce5caf7f2021c2127623e83358f5396111f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.778953 kubelet[2583]: E0513 23:47:50.778872 2583 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00f90b49e36c22f05be6339fdf83eb172ca812c90f7725904159dcb42d073f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zvckz" May 13 23:47:50.778953 kubelet[2583]: E0513 23:47:50.778938 2583 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00f90b49e36c22f05be6339fdf83eb172ca812c90f7725904159dcb42d073f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zvckz" May 13 23:47:50.779061 kubelet[2583]: E0513 23:47:50.779009 2583 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zvckz_kube-system(9a8b11b8-36b3-4628-9f7d-95b71824c9d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zvckz_kube-system(9a8b11b8-36b3-4628-9f7d-95b71824c9d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00f90b49e36c22f05be6339fdf83eb172ca812c90f7725904159dcb42d073f97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zvckz" podUID="9a8b11b8-36b3-4628-9f7d-95b71824c9d3" May 13 23:47:50.780123 containerd[1465]: time="2025-05-13T23:47:50.780079520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8cfz4,Uid:3a3185ac-12bd-4cbd-938b-54dfdd3c7349,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8290470bb272f96353db803afc56ce5caf7f2021c2127623e83358f5396111f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.780471 kubelet[2583]: E0513 23:47:50.780368 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8290470bb272f96353db803afc56ce5caf7f2021c2127623e83358f5396111f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.780471 kubelet[2583]: E0513 23:47:50.780430 2583 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8290470bb272f96353db803afc56ce5caf7f2021c2127623e83358f5396111f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8cfz4" May 13 23:47:50.780471 kubelet[2583]: E0513 23:47:50.780448 2583 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8290470bb272f96353db803afc56ce5caf7f2021c2127623e83358f5396111f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8cfz4" May 13 23:47:50.780606 kubelet[2583]: E0513 23:47:50.780514 2583 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8cfz4_kube-system(3a3185ac-12bd-4cbd-938b-54dfdd3c7349)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8cfz4_kube-system(3a3185ac-12bd-4cbd-938b-54dfdd3c7349)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8290470bb272f96353db803afc56ce5caf7f2021c2127623e83358f5396111f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8cfz4" podUID="3a3185ac-12bd-4cbd-938b-54dfdd3c7349" May 13 23:47:50.784536 containerd[1465]: time="2025-05-13T23:47:50.783998778Z" level=error msg="Failed to destroy network for sandbox \"37d68af6b00def05a138f178ff316d4b22c72cc8a2d7a0107828ae999e656182\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.786588 containerd[1465]: time="2025-05-13T23:47:50.786520284Z" level=error msg="Failed to destroy network for sandbox \"003e29fe52ff422c6d0dcf25fb7ddf8425ade38908d087a4fa59f843f9337a27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.787000 containerd[1465]: time="2025-05-13T23:47:50.786956584Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-764479c758-hpht7,Uid:8c954ca0-00cd-4be0-b6f5-2d91446dce84,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37d68af6b00def05a138f178ff316d4b22c72cc8a2d7a0107828ae999e656182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.787299 kubelet[2583]: E0513 23:47:50.787260 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37d68af6b00def05a138f178ff316d4b22c72cc8a2d7a0107828ae999e656182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.787370 kubelet[2583]: E0513 23:47:50.787305 2583 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37d68af6b00def05a138f178ff316d4b22c72cc8a2d7a0107828ae999e656182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764479c758-hpht7" May 13 23:47:50.787370 kubelet[2583]: E0513 23:47:50.787331 2583 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37d68af6b00def05a138f178ff316d4b22c72cc8a2d7a0107828ae999e656182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764479c758-hpht7" May 13 23:47:50.787696 kubelet[2583]: E0513 
23:47:50.787360 2583 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-764479c758-hpht7_calico-system(8c954ca0-00cd-4be0-b6f5-2d91446dce84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-764479c758-hpht7_calico-system(8c954ca0-00cd-4be0-b6f5-2d91446dce84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37d68af6b00def05a138f178ff316d4b22c72cc8a2d7a0107828ae999e656182\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764479c758-hpht7" podUID="8c954ca0-00cd-4be0-b6f5-2d91446dce84" May 13 23:47:50.788045 containerd[1465]: time="2025-05-13T23:47:50.787998847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd655d978-r6w2s,Uid:9050bd15-4914-4c5d-a7cd-ec2145176ddd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"003e29fe52ff422c6d0dcf25fb7ddf8425ade38908d087a4fa59f843f9337a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.788267 kubelet[2583]: E0513 23:47:50.788153 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"003e29fe52ff422c6d0dcf25fb7ddf8425ade38908d087a4fa59f843f9337a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.788267 kubelet[2583]: E0513 23:47:50.788207 2583 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"003e29fe52ff422c6d0dcf25fb7ddf8425ade38908d087a4fa59f843f9337a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fd655d978-r6w2s" May 13 23:47:50.788267 kubelet[2583]: E0513 23:47:50.788223 2583 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"003e29fe52ff422c6d0dcf25fb7ddf8425ade38908d087a4fa59f843f9337a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fd655d978-r6w2s" May 13 23:47:50.788379 kubelet[2583]: E0513 23:47:50.788250 2583 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fd655d978-r6w2s_calico-apiserver(9050bd15-4914-4c5d-a7cd-ec2145176ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fd655d978-r6w2s_calico-apiserver(9050bd15-4914-4c5d-a7cd-ec2145176ddd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"003e29fe52ff422c6d0dcf25fb7ddf8425ade38908d087a4fa59f843f9337a27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fd655d978-r6w2s" podUID="9050bd15-4914-4c5d-a7cd-ec2145176ddd" May 13 23:47:50.789514 containerd[1465]: time="2025-05-13T23:47:50.789481210Z" level=error msg="Failed to destroy network for sandbox \"4f012aa5162b37a64a58a377f9166b0604ec7f9d890a0677bcff2bf1d9deb3da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.790265 containerd[1465]: time="2025-05-13T23:47:50.790229953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd655d978-bfhsg,Uid:f77d873a-4070-40a3-838a-0695cd06abf4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f012aa5162b37a64a58a377f9166b0604ec7f9d890a0677bcff2bf1d9deb3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.790441 kubelet[2583]: E0513 23:47:50.790389 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f012aa5162b37a64a58a377f9166b0604ec7f9d890a0677bcff2bf1d9deb3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:50.790483 kubelet[2583]: E0513 23:47:50.790447 2583 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f012aa5162b37a64a58a377f9166b0604ec7f9d890a0677bcff2bf1d9deb3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fd655d978-bfhsg" May 13 23:47:50.790483 kubelet[2583]: E0513 23:47:50.790464 2583 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f012aa5162b37a64a58a377f9166b0604ec7f9d890a0677bcff2bf1d9deb3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fd655d978-bfhsg" May 13 23:47:50.790542 kubelet[2583]: E0513 23:47:50.790492 2583 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fd655d978-bfhsg_calico-apiserver(f77d873a-4070-40a3-838a-0695cd06abf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fd655d978-bfhsg_calico-apiserver(f77d873a-4070-40a3-838a-0695cd06abf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f012aa5162b37a64a58a377f9166b0604ec7f9d890a0677bcff2bf1d9deb3da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fd655d978-bfhsg" podUID="f77d873a-4070-40a3-838a-0695cd06abf4" May 13 23:47:51.142228 systemd[1]: Created slice kubepods-besteffort-pod1c36bc46_2f0f_4988_88ed_db9b7f4f7206.slice - libcontainer container kubepods-besteffort-pod1c36bc46_2f0f_4988_88ed_db9b7f4f7206.slice. May 13 23:47:51.147162 containerd[1465]: time="2025-05-13T23:47:51.146853502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-btdr4,Uid:1c36bc46-2f0f-4988-88ed-db9b7f4f7206,Namespace:calico-system,Attempt:0,}" May 13 23:47:51.213460 containerd[1465]: time="2025-05-13T23:47:51.213391416Z" level=error msg="Failed to destroy network for sandbox \"a409508e18f2c1d185966d3c2cc68a34422b479c06678602189c637f499cc28e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:51.214036 systemd[1]: run-netns-cni\x2d909e349c\x2d2556\x2d84bb\x2df68c\x2d2f3ba6ca08fe.mount: Deactivated successfully. 
May 13 23:47:51.215259 systemd[1]: run-netns-cni\x2d2321cee9\x2dc4ce\x2de6b2\x2d8ca8\x2d6af76aedb3da.mount: Deactivated successfully. May 13 23:47:51.215330 systemd[1]: run-netns-cni\x2d19362ea7\x2d01e7\x2d1518\x2dde1f\x2dbac75ff236af.mount: Deactivated successfully. May 13 23:47:51.215378 systemd[1]: run-netns-cni\x2d1090e0f6\x2d6533\x2d8073\x2d9646\x2dd0541d5524e4.mount: Deactivated successfully. May 13 23:47:51.216778 containerd[1465]: time="2025-05-13T23:47:51.216678648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-btdr4,Uid:1c36bc46-2f0f-4988-88ed-db9b7f4f7206,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a409508e18f2c1d185966d3c2cc68a34422b479c06678602189c637f499cc28e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:51.217001 kubelet[2583]: E0513 23:47:51.216908 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a409508e18f2c1d185966d3c2cc68a34422b479c06678602189c637f499cc28e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:47:51.217001 kubelet[2583]: E0513 23:47:51.216961 2583 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a409508e18f2c1d185966d3c2cc68a34422b479c06678602189c637f499cc28e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-btdr4" May 13 23:47:51.217001 kubelet[2583]: E0513 23:47:51.216983 2583 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a409508e18f2c1d185966d3c2cc68a34422b479c06678602189c637f499cc28e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-btdr4" May 13 23:47:51.217089 kubelet[2583]: E0513 23:47:51.217016 2583 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-btdr4_calico-system(1c36bc46-2f0f-4988-88ed-db9b7f4f7206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-btdr4_calico-system(1c36bc46-2f0f-4988-88ed-db9b7f4f7206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a409508e18f2c1d185966d3c2cc68a34422b479c06678602189c637f499cc28e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-btdr4" podUID="1c36bc46-2f0f-4988-88ed-db9b7f4f7206" May 13 23:47:51.219502 systemd[1]: run-netns-cni\x2d737ba5f6\x2da4bb\x2d8b52\x2df16c\x2d124b0837899d.mount: Deactivated successfully. May 13 23:47:53.247611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214087524.mount: Deactivated successfully. 
May 13 23:47:53.528303 containerd[1465]: time="2025-05-13T23:47:53.528169591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:53.531324 containerd[1465]: time="2025-05-13T23:47:53.531262566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 23:47:53.532223 containerd[1465]: time="2025-05-13T23:47:53.532192839Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:53.534245 containerd[1465]: time="2025-05-13T23:47:53.534208084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:53.534837 containerd[1465]: time="2025-05-13T23:47:53.534804916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.285822387s" May 13 23:47:53.534876 containerd[1465]: time="2025-05-13T23:47:53.534839560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 23:47:53.542535 containerd[1465]: time="2025-05-13T23:47:53.542487168Z" level=info msg="CreateContainer within sandbox \"ab94f608c7653f684b4c063de4722fb2770f61ad6716b30ee9175c23358e6209\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:47:53.560072 containerd[1465]: time="2025-05-13T23:47:53.558274603Z" level=info msg="Container 
94cb400eeed376aaa8c4a171b23969264ad4ef07040011f01b5e3ee91ba81dbf: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:53.573612 containerd[1465]: time="2025-05-13T23:47:53.573545975Z" level=info msg="CreateContainer within sandbox \"ab94f608c7653f684b4c063de4722fb2770f61ad6716b30ee9175c23358e6209\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"94cb400eeed376aaa8c4a171b23969264ad4ef07040011f01b5e3ee91ba81dbf\"" May 13 23:47:53.574215 containerd[1465]: time="2025-05-13T23:47:53.574154849Z" level=info msg="StartContainer for \"94cb400eeed376aaa8c4a171b23969264ad4ef07040011f01b5e3ee91ba81dbf\"" May 13 23:47:53.575661 containerd[1465]: time="2025-05-13T23:47:53.575628828Z" level=info msg="connecting to shim 94cb400eeed376aaa8c4a171b23969264ad4ef07040011f01b5e3ee91ba81dbf" address="unix:///run/containerd/s/81f599271a6808f65775009831c028936ef6c55219cda9e84cd6e71632bd24e9" protocol=ttrpc version=3 May 13 23:47:53.596593 systemd[1]: Started cri-containerd-94cb400eeed376aaa8c4a171b23969264ad4ef07040011f01b5e3ee91ba81dbf.scope - libcontainer container 94cb400eeed376aaa8c4a171b23969264ad4ef07040011f01b5e3ee91ba81dbf. May 13 23:47:53.648582 containerd[1465]: time="2025-05-13T23:47:53.648431458Z" level=info msg="StartContainer for \"94cb400eeed376aaa8c4a171b23969264ad4ef07040011f01b5e3ee91ba81dbf\" returns successfully" May 13 23:47:53.838379 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:47:53.838704 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 13 23:47:54.276482 kubelet[2583]: E0513 23:47:54.276449 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:54.296423 kubelet[2583]: I0513 23:47:54.296306 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hhl4m" podStartSLOduration=2.087619391 podStartE2EDuration="14.296285182s" podCreationTimestamp="2025-05-13 23:47:40 +0000 UTC" firstStartedPulling="2025-05-13 23:47:41.326949343 +0000 UTC m=+14.276699574" lastFinishedPulling="2025-05-13 23:47:53.535615174 +0000 UTC m=+26.485365365" observedRunningTime="2025-05-13 23:47:54.296146326 +0000 UTC m=+27.245896597" watchObservedRunningTime="2025-05-13 23:47:54.296285182 +0000 UTC m=+27.246035413" May 13 23:47:54.399044 containerd[1465]: time="2025-05-13T23:47:54.398924193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94cb400eeed376aaa8c4a171b23969264ad4ef07040011f01b5e3ee91ba81dbf\" id:\"0ef28a5dd7af4aedf233d5a3d72b5a4e39b44361abf20150f2c518a9addc0f56\" pid:3649 exit_status:1 exited_at:{seconds:1747180074 nanos:398608796}" May 13 23:47:55.278664 kubelet[2583]: E0513 23:47:55.278622 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:55.336834 containerd[1465]: time="2025-05-13T23:47:55.336774283Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94cb400eeed376aaa8c4a171b23969264ad4ef07040011f01b5e3ee91ba81dbf\" id:\"eec187ac9fa5b1610bf2c09be20f5607f46ebae4351a8b86af52851a88d2b135\" pid:3775 exit_status:1 exited_at:{seconds:1747180075 nanos:336492332}" May 13 23:47:56.386574 kubelet[2583]: I0513 23:47:56.386527 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:47:56.386989 kubelet[2583]: E0513 23:47:56.386861 
2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:56.812544 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:57170.service - OpenSSH per-connection server daemon (10.0.0.1:57170). May 13 23:47:56.883275 sshd[3815]: Accepted publickey for core from 10.0.0.1 port 57170 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 13 23:47:56.890825 sshd-session[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:56.905849 systemd-logind[1443]: New session 8 of user core. May 13 23:47:56.911640 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:47:57.098361 sshd[3817]: Connection closed by 10.0.0.1 port 57170 May 13 23:47:57.098850 sshd-session[3815]: pam_unix(sshd:session): session closed for user core May 13 23:47:57.102753 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:57170.service: Deactivated successfully. May 13 23:47:57.105199 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:47:57.106593 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. May 13 23:47:57.107655 systemd-logind[1443]: Removed session 8. May 13 23:47:57.281637 kubelet[2583]: E0513 23:47:57.281536 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:57.521457 kernel: bpftool[3895]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 23:47:57.687473 systemd-networkd[1385]: vxlan.calico: Link UP May 13 23:47:57.688002 systemd-networkd[1385]: vxlan.calico: Gained carrier May 13 23:47:59.220671 systemd-networkd[1385]: vxlan.calico: Gained IPv6LL May 13 23:48:02.113000 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:57234.service - OpenSSH per-connection server daemon (10.0.0.1:57234). 
May 13 23:48:02.137309 containerd[1465]: time="2025-05-13T23:48:02.137269434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764479c758-hpht7,Uid:8c954ca0-00cd-4be0-b6f5-2d91446dce84,Namespace:calico-system,Attempt:0,}" May 13 23:48:02.189881 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 57234 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE May 13 23:48:02.192737 sshd-session[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:02.201807 systemd-logind[1443]: New session 9 of user core. May 13 23:48:02.212588 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:48:02.434435 sshd[3988]: Connection closed by 10.0.0.1 port 57234 May 13 23:48:02.434461 sshd-session[3973]: pam_unix(sshd:session): session closed for user core May 13 23:48:02.439158 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:57234.service: Deactivated successfully. May 13 23:48:02.441038 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:48:02.442949 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. May 13 23:48:02.443995 systemd-logind[1443]: Removed session 9. 
May 13 23:48:02.539846 systemd-networkd[1385]: cali9eaac0092d4: Link UP May 13 23:48:02.540894 systemd-networkd[1385]: cali9eaac0092d4: Gained carrier May 13 23:48:02.560469 containerd[1465]: 2025-05-13 23:48:02.223 [INFO][3974] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0 calico-kube-controllers-764479c758- calico-system 8c954ca0-00cd-4be0-b6f5-2d91446dce84 662 0 2025-05-13 23:47:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:764479c758 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-764479c758-hpht7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9eaac0092d4 [] []}} ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Namespace="calico-system" Pod="calico-kube-controllers-764479c758-hpht7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764479c758--hpht7-" May 13 23:48:02.560469 containerd[1465]: 2025-05-13 23:48:02.224 [INFO][3974] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Namespace="calico-system" Pod="calico-kube-controllers-764479c758-hpht7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0" May 13 23:48:02.560469 containerd[1465]: 2025-05-13 23:48:02.386 [INFO][3991] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" HandleID="k8s-pod-network.2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Workload="localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0" May 13 23:48:02.561134 containerd[1465]: 
2025-05-13 23:48:02.409 [INFO][3991] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" HandleID="k8s-pod-network.2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Workload="localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c0420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-764479c758-hpht7", "timestamp":"2025-05-13 23:48:02.38603848 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:48:02.561134 containerd[1465]: 2025-05-13 23:48:02.409 [INFO][3991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:48:02.561134 containerd[1465]: 2025-05-13 23:48:02.409 [INFO][3991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:48:02.561134 containerd[1465]: 2025-05-13 23:48:02.409 [INFO][3991] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:48:02.561134 containerd[1465]: 2025-05-13 23:48:02.414 [INFO][3991] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" host="localhost" May 13 23:48:02.561134 containerd[1465]: 2025-05-13 23:48:02.506 [INFO][3991] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:48:02.561134 containerd[1465]: 2025-05-13 23:48:02.512 [INFO][3991] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:48:02.561134 containerd[1465]: 2025-05-13 23:48:02.515 [INFO][3991] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:48:02.561134 containerd[1465]: 2025-05-13 23:48:02.517 [INFO][3991] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:48:02.561134 containerd[1465]: 2025-05-13 23:48:02.517 [INFO][3991] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" host="localhost" May 13 23:48:02.561548 containerd[1465]: 2025-05-13 23:48:02.520 [INFO][3991] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07 May 13 23:48:02.561548 containerd[1465]: 2025-05-13 23:48:02.524 [INFO][3991] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" host="localhost" May 13 23:48:02.561548 containerd[1465]: 2025-05-13 23:48:02.532 [INFO][3991] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" host="localhost" May 13 23:48:02.561548 containerd[1465]: 2025-05-13 23:48:02.532 [INFO][3991] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" host="localhost" May 13 23:48:02.561548 containerd[1465]: 2025-05-13 23:48:02.532 [INFO][3991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:48:02.561548 containerd[1465]: 2025-05-13 23:48:02.532 [INFO][3991] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" HandleID="k8s-pod-network.2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Workload="localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0" May 13 23:48:02.562278 containerd[1465]: 2025-05-13 23:48:02.535 [INFO][3974] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Namespace="calico-system" Pod="calico-kube-controllers-764479c758-hpht7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0", GenerateName:"calico-kube-controllers-764479c758-", Namespace:"calico-system", SelfLink:"", UID:"8c954ca0-00cd-4be0-b6f5-2d91446dce84", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764479c758", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-764479c758-hpht7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9eaac0092d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:02.564559 containerd[1465]: 2025-05-13 23:48:02.535 [INFO][3974] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Namespace="calico-system" Pod="calico-kube-controllers-764479c758-hpht7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0" May 13 23:48:02.564559 containerd[1465]: 2025-05-13 23:48:02.535 [INFO][3974] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9eaac0092d4 ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Namespace="calico-system" Pod="calico-kube-controllers-764479c758-hpht7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0" May 13 23:48:02.564559 containerd[1465]: 2025-05-13 23:48:02.541 [INFO][3974] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Namespace="calico-system" Pod="calico-kube-controllers-764479c758-hpht7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0" May 13 23:48:02.564728 containerd[1465]: 2025-05-13 23:48:02.542 [INFO][3974] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Namespace="calico-system" Pod="calico-kube-controllers-764479c758-hpht7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0", GenerateName:"calico-kube-controllers-764479c758-", Namespace:"calico-system", SelfLink:"", UID:"8c954ca0-00cd-4be0-b6f5-2d91446dce84", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764479c758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07", Pod:"calico-kube-controllers-764479c758-hpht7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9eaac0092d4", MAC:"ce:8d:11:bc:64:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:02.564804 containerd[1465]: 2025-05-13 23:48:02.557 [INFO][3974] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" Namespace="calico-system" Pod="calico-kube-controllers-764479c758-hpht7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764479c758--hpht7-eth0" May 13 23:48:02.710469 containerd[1465]: time="2025-05-13T23:48:02.710202475Z" level=info msg="connecting to shim 2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07" address="unix:///run/containerd/s/7146c19c36c78e319392ebc329ed7374157ccca3309f8ca8a9438180af93e61a" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:02.733612 systemd[1]: Started cri-containerd-2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07.scope - libcontainer container 2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07. May 13 23:48:02.748875 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:48:02.769022 containerd[1465]: time="2025-05-13T23:48:02.768934751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764479c758-hpht7,Uid:8c954ca0-00cd-4be0-b6f5-2d91446dce84,Namespace:calico-system,Attempt:0,} returns sandbox id \"2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07\"" May 13 23:48:02.771089 containerd[1465]: time="2025-05-13T23:48:02.771040137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 23:48:03.137358 kubelet[2583]: E0513 23:48:03.136570 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:03.137898 containerd[1465]: time="2025-05-13T23:48:03.136665607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-btdr4,Uid:1c36bc46-2f0f-4988-88ed-db9b7f4f7206,Namespace:calico-system,Attempt:0,}" May 13 23:48:03.137898 containerd[1465]: time="2025-05-13T23:48:03.137137328Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd655d978-bfhsg,Uid:f77d873a-4070-40a3-838a-0695cd06abf4,Namespace:calico-apiserver,Attempt:0,}" May 13 23:48:03.137898 containerd[1465]: time="2025-05-13T23:48:03.137319903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvckz,Uid:9a8b11b8-36b3-4628-9f7d-95b71824c9d3,Namespace:kube-system,Attempt:0,}" May 13 23:48:03.434250 systemd-networkd[1385]: cali31aba53f9b2: Link UP May 13 23:48:03.434491 systemd-networkd[1385]: cali31aba53f9b2: Gained carrier May 13 23:48:03.469102 containerd[1465]: 2025-05-13 23:48:03.211 [INFO][4098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--zvckz-eth0 coredns-668d6bf9bc- kube-system 9a8b11b8-36b3-4628-9f7d-95b71824c9d3 665 0 2025-05-13 23:47:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-zvckz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali31aba53f9b2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-zvckz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zvckz-" May 13 23:48:03.469102 containerd[1465]: 2025-05-13 23:48:03.211 [INFO][4098] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-zvckz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zvckz-eth0" May 13 23:48:03.469102 containerd[1465]: 2025-05-13 23:48:03.249 [INFO][4130] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" HandleID="k8s-pod-network.562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Workload="localhost-k8s-coredns--668d6bf9bc--zvckz-eth0" May 13 23:48:03.469340 containerd[1465]: 2025-05-13 23:48:03.369 [INFO][4130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" HandleID="k8s-pod-network.562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Workload="localhost-k8s-coredns--668d6bf9bc--zvckz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2e60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-zvckz", "timestamp":"2025-05-13 23:48:03.249525857 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:48:03.469340 containerd[1465]: 2025-05-13 23:48:03.369 [INFO][4130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:48:03.469340 containerd[1465]: 2025-05-13 23:48:03.369 [INFO][4130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:48:03.469340 containerd[1465]: 2025-05-13 23:48:03.369 [INFO][4130] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:48:03.469340 containerd[1465]: 2025-05-13 23:48:03.372 [INFO][4130] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" host="localhost" May 13 23:48:03.469340 containerd[1465]: 2025-05-13 23:48:03.379 [INFO][4130] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:48:03.469340 containerd[1465]: 2025-05-13 23:48:03.385 [INFO][4130] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:48:03.469340 containerd[1465]: 2025-05-13 23:48:03.388 [INFO][4130] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:48:03.469340 containerd[1465]: 2025-05-13 23:48:03.392 [INFO][4130] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:48:03.469340 containerd[1465]: 2025-05-13 23:48:03.392 [INFO][4130] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" host="localhost" May 13 23:48:03.469573 containerd[1465]: 2025-05-13 23:48:03.395 [INFO][4130] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5 May 13 23:48:03.469573 containerd[1465]: 2025-05-13 23:48:03.403 [INFO][4130] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" host="localhost" May 13 23:48:03.469573 containerd[1465]: 2025-05-13 23:48:03.428 [INFO][4130] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" host="localhost" May 13 23:48:03.469573 containerd[1465]: 2025-05-13 23:48:03.428 [INFO][4130] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" host="localhost" May 13 23:48:03.469573 containerd[1465]: 2025-05-13 23:48:03.428 [INFO][4130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:48:03.469573 containerd[1465]: 2025-05-13 23:48:03.428 [INFO][4130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" HandleID="k8s-pod-network.562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Workload="localhost-k8s-coredns--668d6bf9bc--zvckz-eth0" May 13 23:48:03.469689 containerd[1465]: 2025-05-13 23:48:03.430 [INFO][4098] cni-plugin/k8s.go 386: Populated endpoint ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-zvckz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zvckz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--zvckz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9a8b11b8-36b3-4628-9f7d-95b71824c9d3", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-zvckz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31aba53f9b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:03.469742 containerd[1465]: 2025-05-13 23:48:03.430 [INFO][4098] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-zvckz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zvckz-eth0" May 13 23:48:03.469742 containerd[1465]: 2025-05-13 23:48:03.430 [INFO][4098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31aba53f9b2 ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-zvckz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zvckz-eth0" May 13 23:48:03.469742 containerd[1465]: 2025-05-13 23:48:03.434 [INFO][4098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-zvckz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zvckz-eth0" May 13 
23:48:03.469803 containerd[1465]: 2025-05-13 23:48:03.435 [INFO][4098] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-zvckz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zvckz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--zvckz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9a8b11b8-36b3-4628-9f7d-95b71824c9d3", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5", Pod:"coredns-668d6bf9bc-zvckz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31aba53f9b2", MAC:"ca:d2:74:f3:f2:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:03.469803 containerd[1465]: 2025-05-13 23:48:03.465 [INFO][4098] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-zvckz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zvckz-eth0" May 13 23:48:03.510311 containerd[1465]: time="2025-05-13T23:48:03.510237880Z" level=info msg="connecting to shim 562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5" address="unix:///run/containerd/s/5c6c29082456e6879cb57c242ac6f621711c0f5fa15179c9a51b60029f7af706" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:03.535733 systemd-networkd[1385]: cali082b924f7ba: Link UP May 13 23:48:03.536034 systemd-networkd[1385]: cali082b924f7ba: Gained carrier May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.199 [INFO][4078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--btdr4-eth0 csi-node-driver- calico-system 1c36bc46-2f0f-4988-88ed-db9b7f4f7206 581 0 2025-05-13 23:47:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-btdr4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali082b924f7ba [] []}} ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Namespace="calico-system" Pod="csi-node-driver-btdr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--btdr4-" May 13 23:48:03.557359 containerd[1465]: 
2025-05-13 23:48:03.199 [INFO][4078] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Namespace="calico-system" Pod="csi-node-driver-btdr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--btdr4-eth0" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.266 [INFO][4124] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" HandleID="k8s-pod-network.14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Workload="localhost-k8s-csi--node--driver--btdr4-eth0" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.371 [INFO][4124] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" HandleID="k8s-pod-network.14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Workload="localhost-k8s-csi--node--driver--btdr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-btdr4", "timestamp":"2025-05-13 23:48:03.266282295 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.372 [INFO][4124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.428 [INFO][4124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.428 [INFO][4124] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.475 [INFO][4124] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" host="localhost" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.487 [INFO][4124] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.505 [INFO][4124] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.509 [INFO][4124] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.512 [INFO][4124] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.512 [INFO][4124] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" host="localhost" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.514 [INFO][4124] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1 May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.521 [INFO][4124] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" host="localhost" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.529 [INFO][4124] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" host="localhost" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.529 [INFO][4124] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" host="localhost" May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.529 [INFO][4124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:48:03.557359 containerd[1465]: 2025-05-13 23:48:03.529 [INFO][4124] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" HandleID="k8s-pod-network.14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Workload="localhost-k8s-csi--node--driver--btdr4-eth0" May 13 23:48:03.558307 containerd[1465]: 2025-05-13 23:48:03.532 [INFO][4078] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Namespace="calico-system" Pod="csi-node-driver-btdr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--btdr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--btdr4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c36bc46-2f0f-4988-88ed-db9b7f4f7206", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-btdr4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali082b924f7ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:03.558307 containerd[1465]: 2025-05-13 23:48:03.533 [INFO][4078] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Namespace="calico-system" Pod="csi-node-driver-btdr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--btdr4-eth0" May 13 23:48:03.558307 containerd[1465]: 2025-05-13 23:48:03.533 [INFO][4078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali082b924f7ba ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Namespace="calico-system" Pod="csi-node-driver-btdr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--btdr4-eth0" May 13 23:48:03.558307 containerd[1465]: 2025-05-13 23:48:03.535 [INFO][4078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Namespace="calico-system" Pod="csi-node-driver-btdr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--btdr4-eth0" May 13 23:48:03.558307 containerd[1465]: 2025-05-13 23:48:03.538 [INFO][4078] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Namespace="calico-system" 
Pod="csi-node-driver-btdr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--btdr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--btdr4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c36bc46-2f0f-4988-88ed-db9b7f4f7206", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1", Pod:"csi-node-driver-btdr4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali082b924f7ba", MAC:"b2:62:81:c8:07:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:03.558307 containerd[1465]: 2025-05-13 23:48:03.551 [INFO][4078] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" Namespace="calico-system" Pod="csi-node-driver-btdr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--btdr4-eth0" May 13 23:48:03.574593 systemd[1]: Started 
cri-containerd-562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5.scope - libcontainer container 562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5. May 13 23:48:03.605255 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:48:03.630028 containerd[1465]: time="2025-05-13T23:48:03.629943757Z" level=info msg="connecting to shim 14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1" address="unix:///run/containerd/s/680c5853d00fafcb119eea80c05f6987036ebe42ee3ce69e4e4448d093c94a5a" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:03.645008 systemd-networkd[1385]: cali671da6d26c7: Link UP May 13 23:48:03.645778 systemd-networkd[1385]: cali671da6d26c7: Gained carrier May 13 23:48:03.664215 containerd[1465]: time="2025-05-13T23:48:03.664155494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvckz,Uid:9a8b11b8-36b3-4628-9f7d-95b71824c9d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5\"" May 13 23:48:03.665777 kubelet[2583]: E0513 23:48:03.665749 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.211 [INFO][4081] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0 calico-apiserver-fd655d978- calico-apiserver f77d873a-4070-40a3-838a-0695cd06abf4 667 0 2025-05-13 23:47:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fd655d978 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-fd655d978-bfhsg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali671da6d26c7 [] []}} ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-bfhsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--bfhsg-" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.211 [INFO][4081] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-bfhsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.258 [INFO][4123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" HandleID="k8s-pod-network.10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Workload="localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.371 [INFO][4123] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" HandleID="k8s-pod-network.10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Workload="localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000382690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-fd655d978-bfhsg", "timestamp":"2025-05-13 23:48:03.258169359 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:48:03.667687 containerd[1465]: 2025-05-13 
23:48:03.372 [INFO][4123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.529 [INFO][4123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.530 [INFO][4123] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.581 [INFO][4123] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" host="localhost" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.589 [INFO][4123] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.600 [INFO][4123] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.606 [INFO][4123] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.610 [INFO][4123] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.610 [INFO][4123] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" host="localhost" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.613 [INFO][4123] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250 May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.622 [INFO][4123] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" host="localhost" 
May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.630 [INFO][4123] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" host="localhost" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.630 [INFO][4123] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" host="localhost" May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.630 [INFO][4123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:48:03.667687 containerd[1465]: 2025-05-13 23:48:03.630 [INFO][4123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" HandleID="k8s-pod-network.10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Workload="localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0" May 13 23:48:03.669323 containerd[1465]: 2025-05-13 23:48:03.638 [INFO][4081] cni-plugin/k8s.go 386: Populated endpoint ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-bfhsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0", GenerateName:"calico-apiserver-fd655d978-", Namespace:"calico-apiserver", SelfLink:"", UID:"f77d873a-4070-40a3-838a-0695cd06abf4", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd655d978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fd655d978-bfhsg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali671da6d26c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:03.669323 containerd[1465]: 2025-05-13 23:48:03.639 [INFO][4081] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-bfhsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0" May 13 23:48:03.669323 containerd[1465]: 2025-05-13 23:48:03.639 [INFO][4081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali671da6d26c7 ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-bfhsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0" May 13 23:48:03.669323 containerd[1465]: 2025-05-13 23:48:03.644 [INFO][4081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-bfhsg" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0" May 13 23:48:03.669323 containerd[1465]: 2025-05-13 23:48:03.646 [INFO][4081] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-bfhsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0", GenerateName:"calico-apiserver-fd655d978-", Namespace:"calico-apiserver", SelfLink:"", UID:"f77d873a-4070-40a3-838a-0695cd06abf4", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd655d978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250", Pod:"calico-apiserver-fd655d978-bfhsg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali671da6d26c7", MAC:"82:07:64:57:0e:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 
23:48:03.669323 containerd[1465]: 2025-05-13 23:48:03.660 [INFO][4081] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-bfhsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--bfhsg-eth0" May 13 23:48:03.671583 containerd[1465]: time="2025-05-13T23:48:03.671535968Z" level=info msg="CreateContainer within sandbox \"562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:48:03.689611 systemd[1]: Started cri-containerd-14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1.scope - libcontainer container 14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1. May 13 23:48:03.704323 containerd[1465]: time="2025-05-13T23:48:03.703610801Z" level=info msg="Container a182a251d2a55abca85bd011c7964e041cb4e397df30fa5808f96c87452e48d9: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:03.706191 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:48:03.771577 containerd[1465]: time="2025-05-13T23:48:03.771288652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-btdr4,Uid:1c36bc46-2f0f-4988-88ed-db9b7f4f7206,Namespace:calico-system,Attempt:0,} returns sandbox id \"14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1\"" May 13 23:48:03.771697 containerd[1465]: time="2025-05-13T23:48:03.771583197Z" level=info msg="CreateContainer within sandbox \"562f7964532390799a0156ffb63284eeb07f6915b6fdc4b86e5185ba2c2cbed5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a182a251d2a55abca85bd011c7964e041cb4e397df30fa5808f96c87452e48d9\"" May 13 23:48:03.775240 containerd[1465]: time="2025-05-13T23:48:03.773441916Z" level=info msg="StartContainer for 
\"a182a251d2a55abca85bd011c7964e041cb4e397df30fa5808f96c87452e48d9\"" May 13 23:48:03.775240 containerd[1465]: time="2025-05-13T23:48:03.774256746Z" level=info msg="connecting to shim a182a251d2a55abca85bd011c7964e041cb4e397df30fa5808f96c87452e48d9" address="unix:///run/containerd/s/5c6c29082456e6879cb57c242ac6f621711c0f5fa15179c9a51b60029f7af706" protocol=ttrpc version=3 May 13 23:48:03.783192 containerd[1465]: time="2025-05-13T23:48:03.783131268Z" level=info msg="connecting to shim 10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250" address="unix:///run/containerd/s/90d4fc81e085ab121854aa561ad2bd3277fa773a393f76ce1a018c7d74476b70" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:03.800771 systemd[1]: Started cri-containerd-a182a251d2a55abca85bd011c7964e041cb4e397df30fa5808f96c87452e48d9.scope - libcontainer container a182a251d2a55abca85bd011c7964e041cb4e397df30fa5808f96c87452e48d9. May 13 23:48:03.827123 systemd[1]: Started cri-containerd-10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250.scope - libcontainer container 10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250. 
May 13 23:48:03.829540 systemd-networkd[1385]: cali9eaac0092d4: Gained IPv6LL May 13 23:48:03.847862 containerd[1465]: time="2025-05-13T23:48:03.847799820Z" level=info msg="StartContainer for \"a182a251d2a55abca85bd011c7964e041cb4e397df30fa5808f96c87452e48d9\" returns successfully" May 13 23:48:03.856990 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:48:03.900558 containerd[1465]: time="2025-05-13T23:48:03.900293327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd655d978-bfhsg,Uid:f77d873a-4070-40a3-838a-0695cd06abf4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250\"" May 13 23:48:04.136753 kubelet[2583]: E0513 23:48:04.136656 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:04.137601 containerd[1465]: time="2025-05-13T23:48:04.137557084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8cfz4,Uid:3a3185ac-12bd-4cbd-938b-54dfdd3c7349,Namespace:kube-system,Attempt:0,}" May 13 23:48:04.138380 containerd[1465]: time="2025-05-13T23:48:04.137876431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd655d978-r6w2s,Uid:9050bd15-4914-4c5d-a7cd-ec2145176ddd,Namespace:calico-apiserver,Attempt:0,}" May 13 23:48:04.319089 kubelet[2583]: E0513 23:48:04.318979 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:04.322653 containerd[1465]: time="2025-05-13T23:48:04.322593678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:04.322938 
containerd[1465]: time="2025-05-13T23:48:04.322735850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 23:48:04.324116 containerd[1465]: time="2025-05-13T23:48:04.324072442Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:04.326230 containerd[1465]: time="2025-05-13T23:48:04.326162176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:04.330024 containerd[1465]: time="2025-05-13T23:48:04.329906888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.558824228s" May 13 23:48:04.330024 containerd[1465]: time="2025-05-13T23:48:04.329970854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 23:48:04.330316 kubelet[2583]: I0513 23:48:04.330239 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zvckz" podStartSLOduration=30.330219714 podStartE2EDuration="30.330219714s" podCreationTimestamp="2025-05-13 23:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:04.329362803 +0000 UTC m=+37.279113034" watchObservedRunningTime="2025-05-13 23:48:04.330219714 
+0000 UTC m=+37.279969905" May 13 23:48:04.332695 containerd[1465]: time="2025-05-13T23:48:04.332294648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 23:48:04.339103 systemd-networkd[1385]: cali03518892b33: Link UP May 13 23:48:04.339655 systemd-networkd[1385]: cali03518892b33: Gained carrier May 13 23:48:04.357232 containerd[1465]: time="2025-05-13T23:48:04.357009709Z" level=info msg="CreateContainer within sandbox \"2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.213 [INFO][4362] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0 coredns-668d6bf9bc- kube-system 3a3185ac-12bd-4cbd-938b-54dfdd3c7349 666 0 2025-05-13 23:47:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-8cfz4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali03518892b33 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Namespace="kube-system" Pod="coredns-668d6bf9bc-8cfz4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8cfz4-" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.213 [INFO][4362] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Namespace="kube-system" Pod="coredns-668d6bf9bc-8cfz4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.272 [INFO][4394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" HandleID="k8s-pod-network.c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Workload="localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.284 [INFO][4394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" HandleID="k8s-pod-network.c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Workload="localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f5700), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-8cfz4", "timestamp":"2025-05-13 23:48:04.271955255 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.284 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.284 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.285 [INFO][4394] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.287 [INFO][4394] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" host="localhost" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.293 [INFO][4394] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.298 [INFO][4394] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.302 [INFO][4394] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.306 [INFO][4394] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.307 [INFO][4394] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" host="localhost" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.310 [INFO][4394] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09 May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.315 [INFO][4394] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" host="localhost" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.327 [INFO][4394] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" host="localhost" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.328 [INFO][4394] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" host="localhost" May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.329 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:48:04.361862 containerd[1465]: 2025-05-13 23:48:04.329 [INFO][4394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" HandleID="k8s-pod-network.c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Workload="localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0" May 13 23:48:04.362644 containerd[1465]: 2025-05-13 23:48:04.334 [INFO][4362] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Namespace="kube-system" Pod="coredns-668d6bf9bc-8cfz4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3a3185ac-12bd-4cbd-938b-54dfdd3c7349", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-8cfz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03518892b33", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:04.362644 containerd[1465]: 2025-05-13 23:48:04.334 [INFO][4362] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Namespace="kube-system" Pod="coredns-668d6bf9bc-8cfz4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0" May 13 23:48:04.362644 containerd[1465]: 2025-05-13 23:48:04.334 [INFO][4362] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03518892b33 ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Namespace="kube-system" Pod="coredns-668d6bf9bc-8cfz4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0" May 13 23:48:04.362644 containerd[1465]: 2025-05-13 23:48:04.339 [INFO][4362] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Namespace="kube-system" Pod="coredns-668d6bf9bc-8cfz4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0" May 13 
23:48:04.362644 containerd[1465]: 2025-05-13 23:48:04.340 [INFO][4362] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Namespace="kube-system" Pod="coredns-668d6bf9bc-8cfz4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3a3185ac-12bd-4cbd-938b-54dfdd3c7349", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09", Pod:"coredns-668d6bf9bc-8cfz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03518892b33", MAC:"96:bc:8a:b6:94:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:04.362644 containerd[1465]: 2025-05-13 23:48:04.353 [INFO][4362] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" Namespace="kube-system" Pod="coredns-668d6bf9bc-8cfz4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8cfz4-eth0" May 13 23:48:04.369051 containerd[1465]: time="2025-05-13T23:48:04.369001589Z" level=info msg="Container c04db41db5593077ca8f61ae0b80e692af42294de24182bdde0c23026dd03761: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:04.383415 containerd[1465]: time="2025-05-13T23:48:04.383351306Z" level=info msg="CreateContainer within sandbox \"2bfdd580aab0f98875acff98e5f811501de25cc5773a215df584c9dc78e07f07\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c04db41db5593077ca8f61ae0b80e692af42294de24182bdde0c23026dd03761\"" May 13 23:48:04.383987 containerd[1465]: time="2025-05-13T23:48:04.383961717Z" level=info msg="StartContainer for \"c04db41db5593077ca8f61ae0b80e692af42294de24182bdde0c23026dd03761\"" May 13 23:48:04.385696 containerd[1465]: time="2025-05-13T23:48:04.385664859Z" level=info msg="connecting to shim c04db41db5593077ca8f61ae0b80e692af42294de24182bdde0c23026dd03761" address="unix:///run/containerd/s/7146c19c36c78e319392ebc329ed7374157ccca3309f8ca8a9438180af93e61a" protocol=ttrpc version=3 May 13 23:48:04.411322 containerd[1465]: time="2025-05-13T23:48:04.411054777Z" level=info msg="connecting to shim c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09" address="unix:///run/containerd/s/ae41e5cf35827a85f0d7f0427c9da9c493f9ae40e6f978cfd685cf7f0531b25b" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:04.415594 systemd[1]: Started 
cri-containerd-c04db41db5593077ca8f61ae0b80e692af42294de24182bdde0c23026dd03761.scope - libcontainer container c04db41db5593077ca8f61ae0b80e692af42294de24182bdde0c23026dd03761. May 13 23:48:04.441968 systemd[1]: Started cri-containerd-c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09.scope - libcontainer container c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09. May 13 23:48:04.443982 systemd-networkd[1385]: cali9693f9c6023: Link UP May 13 23:48:04.453151 systemd-networkd[1385]: cali9693f9c6023: Gained carrier May 13 23:48:04.464609 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.215 [INFO][4372] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0 calico-apiserver-fd655d978- calico-apiserver 9050bd15-4914-4c5d-a7cd-ec2145176ddd 664 0 2025-05-13 23:47:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fd655d978 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fd655d978-r6w2s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9693f9c6023 [] []}} ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-r6w2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--r6w2s-" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.215 [INFO][4372] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-r6w2s" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.283 [INFO][4392] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" HandleID="k8s-pod-network.de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Workload="localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.301 [INFO][4392] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" HandleID="k8s-pod-network.de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Workload="localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003047d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-fd655d978-r6w2s", "timestamp":"2025-05-13 23:48:04.283722036 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.301 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.329 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.330 [INFO][4392] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.388 [INFO][4392] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" host="localhost" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.399 [INFO][4392] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.408 [INFO][4392] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.410 [INFO][4392] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.414 [INFO][4392] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.414 [INFO][4392] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" host="localhost" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.418 [INFO][4392] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5 May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.425 [INFO][4392] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" host="localhost" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.433 [INFO][4392] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" host="localhost" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.433 [INFO][4392] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" host="localhost" May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.434 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:48:04.470713 containerd[1465]: 2025-05-13 23:48:04.434 [INFO][4392] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" HandleID="k8s-pod-network.de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Workload="localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0" May 13 23:48:04.471923 containerd[1465]: 2025-05-13 23:48:04.439 [INFO][4372] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-r6w2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0", GenerateName:"calico-apiserver-fd655d978-", Namespace:"calico-apiserver", SelfLink:"", UID:"9050bd15-4914-4c5d-a7cd-ec2145176ddd", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd655d978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fd655d978-r6w2s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9693f9c6023", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:04.471923 containerd[1465]: 2025-05-13 23:48:04.440 [INFO][4372] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-r6w2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0" May 13 23:48:04.471923 containerd[1465]: 2025-05-13 23:48:04.440 [INFO][4372] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9693f9c6023 ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-r6w2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0" May 13 23:48:04.471923 containerd[1465]: 2025-05-13 23:48:04.453 [INFO][4372] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-r6w2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0" May 13 23:48:04.471923 containerd[1465]: 2025-05-13 23:48:04.453 [INFO][4372] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Namespace="calico-apiserver" Pod="calico-apiserver-fd655d978-r6w2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0", GenerateName:"calico-apiserver-fd655d978-", Namespace:"calico-apiserver", SelfLink:"", UID:"9050bd15-4914-4c5d-a7cd-ec2145176ddd", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fd655d978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5", Pod:"calico-apiserver-fd655d978-r6w2s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9693f9c6023", MAC:"a2:04:f0:be:f8:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:48:04.471923 containerd[1465]: 2025-05-13 23:48:04.465 [INFO][4372] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" Namespace="calico-apiserver" 
Pod="calico-apiserver-fd655d978-r6w2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--fd655d978--r6w2s-eth0" May 13 23:48:04.486509 containerd[1465]: time="2025-05-13T23:48:04.485498226Z" level=info msg="StartContainer for \"c04db41db5593077ca8f61ae0b80e692af42294de24182bdde0c23026dd03761\" returns successfully" May 13 23:48:04.501020 containerd[1465]: time="2025-05-13T23:48:04.500983478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8cfz4,Uid:3a3185ac-12bd-4cbd-938b-54dfdd3c7349,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09\"" May 13 23:48:04.503815 kubelet[2583]: E0513 23:48:04.503785 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:04.504958 containerd[1465]: time="2025-05-13T23:48:04.504844560Z" level=info msg="connecting to shim de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5" address="unix:///run/containerd/s/cb883bf923f0b774052f2f1c07e229d77fd2fe098594885c1c3931a6fb6e2f43" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:04.507850 containerd[1465]: time="2025-05-13T23:48:04.507807527Z" level=info msg="CreateContainer within sandbox \"c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:48:04.525169 containerd[1465]: time="2025-05-13T23:48:04.525062606Z" level=info msg="Container 58c3ac3b3e466fc622a6fc45a8bd24077d85b07fa35eea54c78b7ec4010479c6: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:04.533724 systemd[1]: Started cri-containerd-de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5.scope - libcontainer container de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5. 
May 13 23:48:04.537337 containerd[1465]: time="2025-05-13T23:48:04.537176977Z" level=info msg="CreateContainer within sandbox \"c6efbaeff80d2fd48d992a561d4f8ff523a2a12917f7f9cfa6066b2730a47e09\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"58c3ac3b3e466fc622a6fc45a8bd24077d85b07fa35eea54c78b7ec4010479c6\"" May 13 23:48:04.539413 containerd[1465]: time="2025-05-13T23:48:04.538647660Z" level=info msg="StartContainer for \"58c3ac3b3e466fc622a6fc45a8bd24077d85b07fa35eea54c78b7ec4010479c6\"" May 13 23:48:04.539938 containerd[1465]: time="2025-05-13T23:48:04.539523653Z" level=info msg="connecting to shim 58c3ac3b3e466fc622a6fc45a8bd24077d85b07fa35eea54c78b7ec4010479c6" address="unix:///run/containerd/s/ae41e5cf35827a85f0d7f0427c9da9c493f9ae40e6f978cfd685cf7f0531b25b" protocol=ttrpc version=3 May 13 23:48:04.555542 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:48:04.568578 systemd[1]: Started cri-containerd-58c3ac3b3e466fc622a6fc45a8bd24077d85b07fa35eea54c78b7ec4010479c6.scope - libcontainer container 58c3ac3b3e466fc622a6fc45a8bd24077d85b07fa35eea54c78b7ec4010479c6. 
May 13 23:48:04.602054 containerd[1465]: time="2025-05-13T23:48:04.601995063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fd655d978-r6w2s,Uid:9050bd15-4914-4c5d-a7cd-ec2145176ddd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5\"" May 13 23:48:04.620089 containerd[1465]: time="2025-05-13T23:48:04.619943801Z" level=info msg="StartContainer for \"58c3ac3b3e466fc622a6fc45a8bd24077d85b07fa35eea54c78b7ec4010479c6\" returns successfully" May 13 23:48:05.229150 containerd[1465]: time="2025-05-13T23:48:05.229088971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:05.229820 containerd[1465]: time="2025-05-13T23:48:05.229762706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 23:48:05.230480 containerd[1465]: time="2025-05-13T23:48:05.230428080Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:05.233083 containerd[1465]: time="2025-05-13T23:48:05.232859277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:05.233526 containerd[1465]: time="2025-05-13T23:48:05.233492769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 901.152517ms" May 13 23:48:05.233588 containerd[1465]: 
time="2025-05-13T23:48:05.233528931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 23:48:05.234603 containerd[1465]: time="2025-05-13T23:48:05.234577697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:48:05.236538 containerd[1465]: time="2025-05-13T23:48:05.236220870Z" level=info msg="CreateContainer within sandbox \"14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 23:48:05.236835 systemd-networkd[1385]: cali082b924f7ba: Gained IPv6LL May 13 23:48:05.244957 containerd[1465]: time="2025-05-13T23:48:05.244909615Z" level=info msg="Container a1a5032894f3a0ead7bb5af9df210028c5dbed43fa24674bf27b3184981cc233: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:05.254958 containerd[1465]: time="2025-05-13T23:48:05.254900425Z" level=info msg="CreateContainer within sandbox \"14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a1a5032894f3a0ead7bb5af9df210028c5dbed43fa24674bf27b3184981cc233\"" May 13 23:48:05.255559 containerd[1465]: time="2025-05-13T23:48:05.255487193Z" level=info msg="StartContainer for \"a1a5032894f3a0ead7bb5af9df210028c5dbed43fa24674bf27b3184981cc233\"" May 13 23:48:05.257118 containerd[1465]: time="2025-05-13T23:48:05.257074401Z" level=info msg="connecting to shim a1a5032894f3a0ead7bb5af9df210028c5dbed43fa24674bf27b3184981cc233" address="unix:///run/containerd/s/680c5853d00fafcb119eea80c05f6987036ebe42ee3ce69e4e4448d093c94a5a" protocol=ttrpc version=3 May 13 23:48:05.278618 systemd[1]: Started cri-containerd-a1a5032894f3a0ead7bb5af9df210028c5dbed43fa24674bf27b3184981cc233.scope - libcontainer container a1a5032894f3a0ead7bb5af9df210028c5dbed43fa24674bf27b3184981cc233. 
May 13 23:48:05.322730 containerd[1465]: time="2025-05-13T23:48:05.322668883Z" level=info msg="StartContainer for \"a1a5032894f3a0ead7bb5af9df210028c5dbed43fa24674bf27b3184981cc233\" returns successfully" May 13 23:48:05.335329 kubelet[2583]: E0513 23:48:05.333639 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:05.339012 kubelet[2583]: E0513 23:48:05.338736 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:05.353378 kubelet[2583]: I0513 23:48:05.352380 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-764479c758-hpht7" podStartSLOduration=23.791363198 podStartE2EDuration="25.352361411s" podCreationTimestamp="2025-05-13 23:47:40 +0000 UTC" firstStartedPulling="2025-05-13 23:48:02.770505129 +0000 UTC m=+35.720255360" lastFinishedPulling="2025-05-13 23:48:04.331503342 +0000 UTC m=+37.281253573" observedRunningTime="2025-05-13 23:48:05.33718334 +0000 UTC m=+38.286933611" watchObservedRunningTime="2025-05-13 23:48:05.352361411 +0000 UTC m=+38.302111642" May 13 23:48:05.366484 systemd-networkd[1385]: cali03518892b33: Gained IPv6LL May 13 23:48:05.366752 systemd-networkd[1385]: cali31aba53f9b2: Gained IPv6LL May 13 23:48:05.378177 kubelet[2583]: I0513 23:48:05.378107 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8cfz4" podStartSLOduration=31.37798425 podStartE2EDuration="31.37798425s" podCreationTimestamp="2025-05-13 23:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:05.353014784 +0000 UTC m=+38.302765055" watchObservedRunningTime="2025-05-13 23:48:05.37798425 
+0000 UTC m=+38.327734481" May 13 23:48:05.620980 systemd-networkd[1385]: cali671da6d26c7: Gained IPv6LL May 13 23:48:06.324730 systemd-networkd[1385]: cali9693f9c6023: Gained IPv6LL May 13 23:48:06.343571 kubelet[2583]: I0513 23:48:06.343530 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:48:06.344120 kubelet[2583]: E0513 23:48:06.343870 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:06.727008 containerd[1465]: time="2025-05-13T23:48:06.726957519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:06.727697 containerd[1465]: time="2025-05-13T23:48:06.727636013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 23:48:06.732229 containerd[1465]: time="2025-05-13T23:48:06.732182252Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:06.736190 containerd[1465]: time="2025-05-13T23:48:06.736113002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:06.737071 containerd[1465]: time="2025-05-13T23:48:06.736767934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.502041745s" May 13 23:48:06.737071 
containerd[1465]: time="2025-05-13T23:48:06.736814777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:48:06.738666 containerd[1465]: time="2025-05-13T23:48:06.738636081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:48:06.739496 containerd[1465]: time="2025-05-13T23:48:06.739465507Z" level=info msg="CreateContainer within sandbox \"10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:48:06.752882 containerd[1465]: time="2025-05-13T23:48:06.751984695Z" level=info msg="Container 73fcacc213b9f7466f70278e2dc648ba5842a28c3ef2dfc53f91d49029f3874f: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:06.767972 containerd[1465]: time="2025-05-13T23:48:06.767914233Z" level=info msg="CreateContainer within sandbox \"10cae78efb7f2cef23f209a61294d834743ecd7fb779d403d380203ad2ffc250\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"73fcacc213b9f7466f70278e2dc648ba5842a28c3ef2dfc53f91d49029f3874f\"" May 13 23:48:06.768625 containerd[1465]: time="2025-05-13T23:48:06.768582726Z" level=info msg="StartContainer for \"73fcacc213b9f7466f70278e2dc648ba5842a28c3ef2dfc53f91d49029f3874f\"" May 13 23:48:06.770430 containerd[1465]: time="2025-05-13T23:48:06.770389869Z" level=info msg="connecting to shim 73fcacc213b9f7466f70278e2dc648ba5842a28c3ef2dfc53f91d49029f3874f" address="unix:///run/containerd/s/90d4fc81e085ab121854aa561ad2bd3277fa773a393f76ce1a018c7d74476b70" protocol=ttrpc version=3 May 13 23:48:06.791613 systemd[1]: Started cri-containerd-73fcacc213b9f7466f70278e2dc648ba5842a28c3ef2dfc53f91d49029f3874f.scope - libcontainer container 73fcacc213b9f7466f70278e2dc648ba5842a28c3ef2dfc53f91d49029f3874f. 
May 13 23:48:06.843193 containerd[1465]: time="2025-05-13T23:48:06.840345394Z" level=info msg="StartContainer for \"73fcacc213b9f7466f70278e2dc648ba5842a28c3ef2dfc53f91d49029f3874f\" returns successfully" May 13 23:48:06.964642 containerd[1465]: time="2025-05-13T23:48:06.964582845Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:06.965394 containerd[1465]: time="2025-05-13T23:48:06.965335065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 23:48:06.967282 containerd[1465]: time="2025-05-13T23:48:06.967239215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 228.565691ms" May 13 23:48:06.967282 containerd[1465]: time="2025-05-13T23:48:06.967279058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:48:06.971260 containerd[1465]: time="2025-05-13T23:48:06.971066277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 23:48:06.972150 containerd[1465]: time="2025-05-13T23:48:06.972116120Z" level=info msg="CreateContainer within sandbox \"de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:48:06.994545 containerd[1465]: time="2025-05-13T23:48:06.994388279Z" level=info msg="Container e5d1de5abfc1690d77a5d71d678b8b2d9b768866840995de521dc633c391e409: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:07.006844 
containerd[1465]: time="2025-05-13T23:48:07.006795886Z" level=info msg="CreateContainer within sandbox \"de88d426d3d555025f4e3ce0fb6b218ecb17815785ceab0e01e2ca78e08f37f5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e5d1de5abfc1690d77a5d71d678b8b2d9b768866840995de521dc633c391e409\"" May 13 23:48:07.007601 containerd[1465]: time="2025-05-13T23:48:07.007461937Z" level=info msg="StartContainer for \"e5d1de5abfc1690d77a5d71d678b8b2d9b768866840995de521dc633c391e409\"" May 13 23:48:07.008939 containerd[1465]: time="2025-05-13T23:48:07.008911129Z" level=info msg="connecting to shim e5d1de5abfc1690d77a5d71d678b8b2d9b768866840995de521dc633c391e409" address="unix:///run/containerd/s/cb883bf923f0b774052f2f1c07e229d77fd2fe098594885c1c3931a6fb6e2f43" protocol=ttrpc version=3 May 13 23:48:07.035622 systemd[1]: Started cri-containerd-e5d1de5abfc1690d77a5d71d678b8b2d9b768866840995de521dc633c391e409.scope - libcontainer container e5d1de5abfc1690d77a5d71d678b8b2d9b768866840995de521dc633c391e409. 
May 13 23:48:07.081301 containerd[1465]: time="2025-05-13T23:48:07.081260297Z" level=info msg="StartContainer for \"e5d1de5abfc1690d77a5d71d678b8b2d9b768866840995de521dc633c391e409\" returns successfully" May 13 23:48:07.350750 kubelet[2583]: E0513 23:48:07.350337 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:07.387008 kubelet[2583]: I0513 23:48:07.386863 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fd655d978-r6w2s" podStartSLOduration=25.020966347 podStartE2EDuration="27.386841935s" podCreationTimestamp="2025-05-13 23:47:40 +0000 UTC" firstStartedPulling="2025-05-13 23:48:04.605027196 +0000 UTC m=+37.554777427" lastFinishedPulling="2025-05-13 23:48:06.970902784 +0000 UTC m=+39.920653015" observedRunningTime="2025-05-13 23:48:07.383995076 +0000 UTC m=+40.333745307" watchObservedRunningTime="2025-05-13 23:48:07.386841935 +0000 UTC m=+40.336592166" May 13 23:48:07.389693 kubelet[2583]: I0513 23:48:07.387089 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fd655d978-bfhsg" podStartSLOduration=24.551108637 podStartE2EDuration="27.387083473s" podCreationTimestamp="2025-05-13 23:47:40 +0000 UTC" firstStartedPulling="2025-05-13 23:48:03.901912226 +0000 UTC m=+36.851662457" lastFinishedPulling="2025-05-13 23:48:06.737886982 +0000 UTC m=+39.687637293" observedRunningTime="2025-05-13 23:48:07.36339565 +0000 UTC m=+40.313145961" watchObservedRunningTime="2025-05-13 23:48:07.387083473 +0000 UTC m=+40.336833704" May 13 23:48:07.452739 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:52332.service - OpenSSH per-connection server daemon (10.0.0.1:52332). 
May 13 23:48:07.547211 sshd[4727]: Accepted publickey for core from 10.0.0.1 port 52332 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:07.549351 sshd-session[4727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:07.554148 systemd-logind[1443]: New session 10 of user core.
May 13 23:48:07.564587 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 23:48:07.972545 sshd[4729]: Connection closed by 10.0.0.1 port 52332
May 13 23:48:07.973217 sshd-session[4727]: pam_unix(sshd:session): session closed for user core
May 13 23:48:07.988246 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:52332.service: Deactivated successfully.
May 13 23:48:07.991632 systemd[1]: session-10.scope: Deactivated successfully.
May 13 23:48:07.992877 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit.
May 13 23:48:07.997163 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:52348.service - OpenSSH per-connection server daemon (10.0.0.1:52348).
May 13 23:48:07.999778 systemd-logind[1443]: Removed session 10.
May 13 23:48:08.075445 sshd[4749]: Accepted publickey for core from 10.0.0.1 port 52348 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:08.077840 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:08.084733 systemd-logind[1443]: New session 11 of user core.
May 13 23:48:08.094418 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 23:48:08.352887 kubelet[2583]: I0513 23:48:08.352449 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 23:48:08.352887 kubelet[2583]: I0513 23:48:08.352448 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 23:48:08.402156 containerd[1465]: time="2025-05-13T23:48:08.401609035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:08.403941 containerd[1465]: time="2025-05-13T23:48:08.403825881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 13 23:48:08.405956 containerd[1465]: time="2025-05-13T23:48:08.405895077Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:08.410134 containerd[1465]: time="2025-05-13T23:48:08.410031107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:08.411470 containerd[1465]: time="2025-05-13T23:48:08.411418611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.440284529s"
May 13 23:48:08.411470 containerd[1465]: time="2025-05-13T23:48:08.411464015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 13 23:48:08.417076 sshd[4756]: Connection closed by 10.0.0.1 port 52348
May 13 23:48:08.417556 sshd-session[4749]: pam_unix(sshd:session): session closed for user core
May 13 23:48:08.418271 containerd[1465]: time="2025-05-13T23:48:08.418202241Z" level=info msg="CreateContainer within sandbox \"14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 13 23:48:08.435608 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:52350.service - OpenSSH per-connection server daemon (10.0.0.1:52350).
May 13 23:48:08.450039 containerd[1465]: time="2025-05-13T23:48:08.448625405Z" level=info msg="Container 848990823fb4c457856131d9162d839c51960be564535abc0621a2fdedfe720e: CDI devices from CRI Config.CDIDevices: []"
May 13 23:48:08.456332 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:52348.service: Deactivated successfully.
May 13 23:48:08.473764 systemd[1]: session-11.scope: Deactivated successfully.
May 13 23:48:08.480046 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit.
May 13 23:48:08.486941 containerd[1465]: time="2025-05-13T23:48:08.486797910Z" level=info msg="CreateContainer within sandbox \"14040029827132ef54a966d9270707fc92a2b7817bc6cbe83e5de644539b36b1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"848990823fb4c457856131d9162d839c51960be564535abc0621a2fdedfe720e\""
May 13 23:48:08.487792 systemd-logind[1443]: Removed session 11.
May 13 23:48:08.490812 containerd[1465]: time="2025-05-13T23:48:08.489130085Z" level=info msg="StartContainer for \"848990823fb4c457856131d9162d839c51960be564535abc0621a2fdedfe720e\""
May 13 23:48:08.490812 containerd[1465]: time="2025-05-13T23:48:08.490610117Z" level=info msg="connecting to shim 848990823fb4c457856131d9162d839c51960be564535abc0621a2fdedfe720e" address="unix:///run/containerd/s/680c5853d00fafcb119eea80c05f6987036ebe42ee3ce69e4e4448d093c94a5a" protocol=ttrpc version=3
May 13 23:48:08.522598 systemd[1]: Started cri-containerd-848990823fb4c457856131d9162d839c51960be564535abc0621a2fdedfe720e.scope - libcontainer container 848990823fb4c457856131d9162d839c51960be564535abc0621a2fdedfe720e.
May 13 23:48:08.537571 sshd[4767]: Accepted publickey for core from 10.0.0.1 port 52350 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:08.539171 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:08.546273 systemd-logind[1443]: New session 12 of user core.
May 13 23:48:08.548648 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 23:48:08.599954 containerd[1465]: time="2025-05-13T23:48:08.599049178Z" level=info msg="StartContainer for \"848990823fb4c457856131d9162d839c51960be564535abc0621a2fdedfe720e\" returns successfully"
May 13 23:48:08.844688 sshd[4791]: Connection closed by 10.0.0.1 port 52350
May 13 23:48:08.846811 sshd-session[4767]: pam_unix(sshd:session): session closed for user core
May 13 23:48:08.850664 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:52350.service: Deactivated successfully.
May 13 23:48:08.855530 systemd[1]: session-12.scope: Deactivated successfully.
May 13 23:48:08.857914 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
May 13 23:48:08.860102 systemd-logind[1443]: Removed session 12.
May 13 23:48:09.237700 kubelet[2583]: I0513 23:48:09.237655 2583 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 13 23:48:09.241861 kubelet[2583]: I0513 23:48:09.241820 2583 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 13 23:48:09.372629 kubelet[2583]: I0513 23:48:09.372379 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-btdr4" podStartSLOduration=24.733177695 podStartE2EDuration="29.372361535s" podCreationTimestamp="2025-05-13 23:47:40 +0000 UTC" firstStartedPulling="2025-05-13 23:48:03.775762756 +0000 UTC m=+36.725512987" lastFinishedPulling="2025-05-13 23:48:08.414946636 +0000 UTC m=+41.364696827" observedRunningTime="2025-05-13 23:48:09.371785573 +0000 UTC m=+42.321535804" watchObservedRunningTime="2025-05-13 23:48:09.372361535 +0000 UTC m=+42.322111766"
May 13 23:48:11.425293 kubelet[2583]: I0513 23:48:11.425209 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 23:48:11.465179 containerd[1465]: time="2025-05-13T23:48:11.465118594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c04db41db5593077ca8f61ae0b80e692af42294de24182bdde0c23026dd03761\" id:\"17fb8ff500ccd126db63bc89d35240a6e1a9cf2c3fd86fe274d1254f2847338c\" pid:4837 exited_at:{seconds:1747180091 nanos:464748528}"
May 13 23:48:11.535426 containerd[1465]: time="2025-05-13T23:48:11.529716442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c04db41db5593077ca8f61ae0b80e692af42294de24182bdde0c23026dd03761\" id:\"1c8c9cb409828b06679bd2816ac2f6067d3d04e5a02a5fe1749d25e74af9ba80\" pid:4859 exited_at:{seconds:1747180091 nanos:529378338}"
May 13 23:48:13.865270 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:35088.service - OpenSSH per-connection server daemon (10.0.0.1:35088).
May 13 23:48:13.943478 sshd[4870]: Accepted publickey for core from 10.0.0.1 port 35088 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:13.945542 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:13.953256 systemd-logind[1443]: New session 13 of user core.
May 13 23:48:13.960586 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 23:48:14.174533 sshd[4872]: Connection closed by 10.0.0.1 port 35088
May 13 23:48:14.175109 sshd-session[4870]: pam_unix(sshd:session): session closed for user core
May 13 23:48:14.178842 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:35088.service: Deactivated successfully.
May 13 23:48:14.180722 systemd[1]: session-13.scope: Deactivated successfully.
May 13 23:48:14.181851 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit.
May 13 23:48:14.183097 systemd-logind[1443]: Removed session 13.
May 13 23:48:19.188090 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:35188.service - OpenSSH per-connection server daemon (10.0.0.1:35188).
May 13 23:48:19.238043 sshd[4894]: Accepted publickey for core from 10.0.0.1 port 35188 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:19.239658 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:19.244639 systemd-logind[1443]: New session 14 of user core.
May 13 23:48:19.253649 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 23:48:19.456141 sshd[4896]: Connection closed by 10.0.0.1 port 35188
May 13 23:48:19.457162 sshd-session[4894]: pam_unix(sshd:session): session closed for user core
May 13 23:48:19.466491 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:35188.service: Deactivated successfully.
May 13 23:48:19.473327 systemd[1]: session-14.scope: Deactivated successfully.
May 13 23:48:19.476501 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit.
May 13 23:48:19.477626 systemd-logind[1443]: Removed session 14.
May 13 23:48:24.482076 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:37770.service - OpenSSH per-connection server daemon (10.0.0.1:37770).
May 13 23:48:24.556044 sshd[4909]: Accepted publickey for core from 10.0.0.1 port 37770 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:24.557750 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:24.568341 systemd-logind[1443]: New session 15 of user core.
May 13 23:48:24.573925 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 23:48:24.756423 sshd[4914]: Connection closed by 10.0.0.1 port 37770
May 13 23:48:24.756070 sshd-session[4909]: pam_unix(sshd:session): session closed for user core
May 13 23:48:24.767481 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:37770.service: Deactivated successfully.
May 13 23:48:24.769886 systemd[1]: session-15.scope: Deactivated successfully.
May 13 23:48:24.771247 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit.
May 13 23:48:24.774076 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:37772.service - OpenSSH per-connection server daemon (10.0.0.1:37772).
May 13 23:48:24.775315 systemd-logind[1443]: Removed session 15.
May 13 23:48:24.844988 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 37772 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:24.846624 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:24.851472 systemd-logind[1443]: New session 16 of user core.
May 13 23:48:24.860609 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:48:25.196430 sshd[4930]: Connection closed by 10.0.0.1 port 37772
May 13 23:48:25.197908 sshd-session[4927]: pam_unix(sshd:session): session closed for user core
May 13 23:48:25.211570 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:37772.service: Deactivated successfully.
May 13 23:48:25.214237 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:48:25.215177 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit.
May 13 23:48:25.217929 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:37774.service - OpenSSH per-connection server daemon (10.0.0.1:37774).
May 13 23:48:25.221363 systemd-logind[1443]: Removed session 16.
May 13 23:48:25.316866 sshd[4941]: Accepted publickey for core from 10.0.0.1 port 37774 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:25.321431 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:25.329976 systemd-logind[1443]: New session 17 of user core.
May 13 23:48:25.336637 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:48:25.375769 containerd[1465]: time="2025-05-13T23:48:25.375639890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94cb400eeed376aaa8c4a171b23969264ad4ef07040011f01b5e3ee91ba81dbf\" id:\"024675571302ffe5084a8881b245d06b95cf037d52ca1d8b0a0edb4cffae7fa0\" pid:4956 exited_at:{seconds:1747180105 nanos:375299871}"
May 13 23:48:25.378396 kubelet[2583]: E0513 23:48:25.378328 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:26.293195 sshd[4966]: Connection closed by 10.0.0.1 port 37774
May 13 23:48:26.293973 sshd-session[4941]: pam_unix(sshd:session): session closed for user core
May 13 23:48:26.307254 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:37780.service - OpenSSH per-connection server daemon (10.0.0.1:37780).
May 13 23:48:26.309935 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:37774.service: Deactivated successfully.
May 13 23:48:26.313210 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:48:26.315306 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit.
May 13 23:48:26.321469 systemd-logind[1443]: Removed session 17.
May 13 23:48:26.377899 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 37780 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:26.380494 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:26.389047 systemd-logind[1443]: New session 18 of user core.
May 13 23:48:26.398638 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:48:26.818901 sshd[4988]: Connection closed by 10.0.0.1 port 37780
May 13 23:48:26.819480 sshd-session[4982]: pam_unix(sshd:session): session closed for user core
May 13 23:48:26.830477 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:37796.service - OpenSSH per-connection server daemon (10.0.0.1:37796).
May 13 23:48:26.831043 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:37780.service: Deactivated successfully.
May 13 23:48:26.835300 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:48:26.839023 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
May 13 23:48:26.841607 systemd-logind[1443]: Removed session 18.
May 13 23:48:26.895047 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 37796 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:26.898150 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:26.904671 systemd-logind[1443]: New session 19 of user core.
May 13 23:48:26.910634 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:48:27.128644 sshd[5002]: Connection closed by 10.0.0.1 port 37796
May 13 23:48:27.129116 sshd-session[4997]: pam_unix(sshd:session): session closed for user core
May 13 23:48:27.140622 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:37796.service: Deactivated successfully.
May 13 23:48:27.150666 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:48:27.153968 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
May 13 23:48:27.155238 systemd-logind[1443]: Removed session 19.
May 13 23:48:28.774540 kubelet[2583]: I0513 23:48:28.774491 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 23:48:32.145605 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:37858.service - OpenSSH per-connection server daemon (10.0.0.1:37858).
May 13 23:48:32.247536 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 37858 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:32.250000 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:32.254830 systemd-logind[1443]: New session 20 of user core.
May 13 23:48:32.264607 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 23:48:32.429935 sshd[5026]: Connection closed by 10.0.0.1 port 37858
May 13 23:48:32.431782 sshd-session[5024]: pam_unix(sshd:session): session closed for user core
May 13 23:48:32.437349 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:37858.service: Deactivated successfully.
May 13 23:48:32.439992 systemd[1]: session-20.scope: Deactivated successfully.
May 13 23:48:32.443010 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
May 13 23:48:32.444529 systemd-logind[1443]: Removed session 20.
May 13 23:48:36.137106 kubelet[2583]: E0513 23:48:36.137012 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:37.442927 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:46958.service - OpenSSH per-connection server daemon (10.0.0.1:46958).
May 13 23:48:37.503471 sshd[5041]: Accepted publickey for core from 10.0.0.1 port 46958 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:37.504206 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:37.508131 systemd-logind[1443]: New session 21 of user core.
May 13 23:48:37.516582 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 23:48:37.704631 sshd[5043]: Connection closed by 10.0.0.1 port 46958
May 13 23:48:37.707347 sshd-session[5041]: pam_unix(sshd:session): session closed for user core
May 13 23:48:37.710968 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:46958.service: Deactivated successfully.
May 13 23:48:37.712812 systemd[1]: session-21.scope: Deactivated successfully.
May 13 23:48:37.715230 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
May 13 23:48:37.716521 systemd-logind[1443]: Removed session 21.
May 13 23:48:39.139833 kubelet[2583]: E0513 23:48:39.138033 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:41.517734 containerd[1465]: time="2025-05-13T23:48:41.517688430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c04db41db5593077ca8f61ae0b80e692af42294de24182bdde0c23026dd03761\" id:\"67eb250e26020fd4246164835e4a8e2d8636e6eeada826e18c04c9b2b0fae3e6\" pid:5075 exited_at:{seconds:1747180121 nanos:517385152}"
May 13 23:48:42.721497 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:47314.service - OpenSSH per-connection server daemon (10.0.0.1:47314).
May 13 23:48:42.805040 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 47314 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:42.806967 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:42.811021 systemd-logind[1443]: New session 22 of user core.
May 13 23:48:42.818591 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 23:48:42.992275 sshd[5089]: Connection closed by 10.0.0.1 port 47314
May 13 23:48:42.993446 sshd-session[5087]: pam_unix(sshd:session): session closed for user core
May 13 23:48:42.997725 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:47314.service: Deactivated successfully.
May 13 23:48:42.999673 systemd[1]: session-22.scope: Deactivated successfully.
May 13 23:48:43.002258 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
May 13 23:48:43.003111 systemd-logind[1443]: Removed session 22.
May 13 23:48:44.136954 kubelet[2583]: E0513 23:48:44.136854 2583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:48:48.004336 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:47322.service - OpenSSH per-connection server daemon (10.0.0.1:47322).
May 13 23:48:48.066730 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 47322 ssh2: RSA SHA256:mw68dZYQU0J8UXjv1qvX457MoBIWfYiH3KbOSP4fCfE
May 13 23:48:48.068256 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:48.074062 systemd-logind[1443]: New session 23 of user core.
May 13 23:48:48.084318 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 23:48:48.250417 sshd[5107]: Connection closed by 10.0.0.1 port 47322
May 13 23:48:48.250982 sshd-session[5105]: pam_unix(sshd:session): session closed for user core
May 13 23:48:48.254576 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:47322.service: Deactivated successfully.
May 13 23:48:48.256467 systemd[1]: session-23.scope: Deactivated successfully.
May 13 23:48:48.258151 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
May 13 23:48:48.259332 systemd-logind[1443]: Removed session 23.
May 13 23:48:48.701854 kubelet[2583]: I0513 23:48:48.701658 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"