Jul 9 10:11:44.824295 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 9 10:11:44.824316 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 08:35:24 -00 2025
Jul 9 10:11:44.824326 kernel: KASLR enabled
Jul 9 10:11:44.824331 kernel: efi: EFI v2.7 by EDK II
Jul 9 10:11:44.824337 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 9 10:11:44.824343 kernel: random: crng init done
Jul 9 10:11:44.824350 kernel: secureboot: Secure boot disabled
Jul 9 10:11:44.824356 kernel: ACPI: Early table checksum verification disabled
Jul 9 10:11:44.824362 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 9 10:11:44.824370 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 9 10:11:44.824376 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:11:44.824382 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:11:44.824388 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:11:44.824395 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:11:44.824402 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:11:44.824409 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:11:44.824416 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:11:44.824422 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:11:44.824428 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:11:44.824434 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 9 10:11:44.824440 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 9 10:11:44.824447 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 10:11:44.824453 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 9 10:11:44.824459 kernel: Zone ranges:
Jul 9 10:11:44.824465 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 10:11:44.824472 kernel: DMA32 empty
Jul 9 10:11:44.824478 kernel: Normal empty
Jul 9 10:11:44.824484 kernel: Device empty
Jul 9 10:11:44.824490 kernel: Movable zone start for each node
Jul 9 10:11:44.824496 kernel: Early memory node ranges
Jul 9 10:11:44.824502 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 9 10:11:44.824508 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 9 10:11:44.824514 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 9 10:11:44.824520 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 9 10:11:44.824526 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 9 10:11:44.824532 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 9 10:11:44.824538 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 9 10:11:44.824546 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 9 10:11:44.824552 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 9 10:11:44.824558 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 9 10:11:44.824567 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 9 10:11:44.824573 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 9 10:11:44.824580 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 9 10:11:44.824588 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 10:11:44.824595 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 9 10:11:44.824602 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 9 10:11:44.824608 kernel: psci: probing for conduit method from ACPI.
Jul 9 10:11:44.824615 kernel: psci: PSCIv1.1 detected in firmware.
Jul 9 10:11:44.824621 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 9 10:11:44.824628 kernel: psci: Trusted OS migration not required
Jul 9 10:11:44.824634 kernel: psci: SMC Calling Convention v1.1
Jul 9 10:11:44.824641 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 9 10:11:44.824648 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 9 10:11:44.824655 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 9 10:11:44.824662 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 9 10:11:44.824668 kernel: Detected PIPT I-cache on CPU0
Jul 9 10:11:44.824675 kernel: CPU features: detected: GIC system register CPU interface
Jul 9 10:11:44.824682 kernel: CPU features: detected: Spectre-v4
Jul 9 10:11:44.824688 kernel: CPU features: detected: Spectre-BHB
Jul 9 10:11:44.824695 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 9 10:11:44.824701 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 9 10:11:44.824719 kernel: CPU features: detected: ARM erratum 1418040
Jul 9 10:11:44.824726 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 9 10:11:44.824732 kernel: alternatives: applying boot alternatives
Jul 9 10:11:44.824740 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74a33b1d464884e3b2573e51f747b6939e1912812116b4748b2b08804b5b74c1
Jul 9 10:11:44.824748 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 10:11:44.824756 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 10:11:44.824762 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 10:11:44.824769 kernel: Fallback order for Node 0: 0
Jul 9 10:11:44.824776 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 9 10:11:44.824782 kernel: Policy zone: DMA
Jul 9 10:11:44.824789 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 10:11:44.824796 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 9 10:11:44.824802 kernel: software IO TLB: area num 4.
Jul 9 10:11:44.824809 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 9 10:11:44.824816 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 9 10:11:44.824824 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 9 10:11:44.824830 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 10:11:44.824838 kernel: rcu: RCU event tracing is enabled.
Jul 9 10:11:44.824845 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 9 10:11:44.824852 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 10:11:44.824858 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 10:11:44.824865 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 10:11:44.824872 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 9 10:11:44.824878 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 10:11:44.824886 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 10:11:44.824892 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 9 10:11:44.824900 kernel: GICv3: 256 SPIs implemented
Jul 9 10:11:44.824907 kernel: GICv3: 0 Extended SPIs implemented
Jul 9 10:11:44.824913 kernel: Root IRQ handler: gic_handle_irq
Jul 9 10:11:44.824920 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 9 10:11:44.824926 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 9 10:11:44.824933 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 9 10:11:44.824940 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 9 10:11:44.824946 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 9 10:11:44.824953 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 9 10:11:44.824959 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 9 10:11:44.824966 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 9 10:11:44.824973 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 10:11:44.824981 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 10:11:44.824987 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 9 10:11:44.824995 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 9 10:11:44.825001 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 9 10:11:44.825014 kernel: arm-pv: using stolen time PV
Jul 9 10:11:44.825022 kernel: Console: colour dummy device 80x25
Jul 9 10:11:44.825029 kernel: ACPI: Core revision 20240827
Jul 9 10:11:44.825036 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 9 10:11:44.825043 kernel: pid_max: default: 32768 minimum: 301
Jul 9 10:11:44.825050 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 10:11:44.825059 kernel: landlock: Up and running.
Jul 9 10:11:44.825065 kernel: SELinux: Initializing.
Jul 9 10:11:44.825072 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 10:11:44.825084 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 10:11:44.825091 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 10:11:44.825098 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 10:11:44.825104 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 10:11:44.825111 kernel: Remapping and enabling EFI services.
Jul 9 10:11:44.825117 kernel: smp: Bringing up secondary CPUs ...
Jul 9 10:11:44.825141 kernel: Detected PIPT I-cache on CPU1
Jul 9 10:11:44.825148 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 9 10:11:44.825155 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 9 10:11:44.825164 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 10:11:44.825171 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 9 10:11:44.825178 kernel: Detected PIPT I-cache on CPU2
Jul 9 10:11:44.825185 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 9 10:11:44.825192 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 9 10:11:44.825201 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 10:11:44.825208 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 9 10:11:44.825215 kernel: Detected PIPT I-cache on CPU3
Jul 9 10:11:44.825222 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 9 10:11:44.825230 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 9 10:11:44.825237 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 10:11:44.825244 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 9 10:11:44.825251 kernel: smp: Brought up 1 node, 4 CPUs
Jul 9 10:11:44.825257 kernel: SMP: Total of 4 processors activated.
Jul 9 10:11:44.825265 kernel: CPU: All CPU(s) started at EL1
Jul 9 10:11:44.825272 kernel: CPU features: detected: 32-bit EL0 Support
Jul 9 10:11:44.825279 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 9 10:11:44.825286 kernel: CPU features: detected: Common not Private translations
Jul 9 10:11:44.825293 kernel: CPU features: detected: CRC32 instructions
Jul 9 10:11:44.825300 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 9 10:11:44.825307 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 9 10:11:44.825314 kernel: CPU features: detected: LSE atomic instructions
Jul 9 10:11:44.825322 kernel: CPU features: detected: Privileged Access Never
Jul 9 10:11:44.825330 kernel: CPU features: detected: RAS Extension Support
Jul 9 10:11:44.825337 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 9 10:11:44.825345 kernel: alternatives: applying system-wide alternatives
Jul 9 10:11:44.825351 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 9 10:11:44.825359 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved)
Jul 9 10:11:44.825366 kernel: devtmpfs: initialized
Jul 9 10:11:44.825373 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 10:11:44.825380 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 9 10:11:44.825387 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 9 10:11:44.825395 kernel: 0 pages in range for non-PLT usage
Jul 9 10:11:44.825401 kernel: 508448 pages in range for PLT usage
Jul 9 10:11:44.825408 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 10:11:44.825415 kernel: SMBIOS 3.0.0 present.
Jul 9 10:11:44.825422 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 9 10:11:44.825429 kernel: DMI: Memory slots populated: 1/1
Jul 9 10:11:44.825435 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 10:11:44.825442 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 9 10:11:44.825449 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 9 10:11:44.825457 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 9 10:11:44.825464 kernel: audit: initializing netlink subsys (disabled)
Jul 9 10:11:44.825472 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 9 10:11:44.825478 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 10:11:44.825485 kernel: cpuidle: using governor menu
Jul 9 10:11:44.825492 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 9 10:11:44.825499 kernel: ASID allocator initialised with 32768 entries
Jul 9 10:11:44.825506 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 10:11:44.825513 kernel: Serial: AMBA PL011 UART driver
Jul 9 10:11:44.825521 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 10:11:44.825528 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 10:11:44.825535 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 9 10:11:44.825542 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 9 10:11:44.825550 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 10:11:44.825556 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 10:11:44.825563 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 9 10:11:44.825570 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 9 10:11:44.825577 kernel: ACPI: Added _OSI(Module Device)
Jul 9 10:11:44.825585 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 10:11:44.825592 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 10:11:44.825599 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 10:11:44.825606 kernel: ACPI: Interpreter enabled
Jul 9 10:11:44.825613 kernel: ACPI: Using GIC for interrupt routing
Jul 9 10:11:44.825619 kernel: ACPI: MCFG table detected, 1 entries
Jul 9 10:11:44.825626 kernel: ACPI: CPU0 has been hot-added
Jul 9 10:11:44.825633 kernel: ACPI: CPU1 has been hot-added
Jul 9 10:11:44.825640 kernel: ACPI: CPU2 has been hot-added
Jul 9 10:11:44.825647 kernel: ACPI: CPU3 has been hot-added
Jul 9 10:11:44.825655 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 9 10:11:44.825662 kernel: printk: legacy console [ttyAMA0] enabled
Jul 9 10:11:44.825668 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 10:11:44.825854 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 10:11:44.825926 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 9 10:11:44.825986 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 9 10:11:44.826055 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 9 10:11:44.826128 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 9 10:11:44.826141 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 9 10:11:44.826151 kernel: PCI host bridge to bus 0000:00
Jul 9 10:11:44.826237 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 9 10:11:44.826295 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 9 10:11:44.826351 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 9 10:11:44.826406 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 10:11:44.826487 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 9 10:11:44.826561 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 10:11:44.826627 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 9 10:11:44.826691 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 9 10:11:44.826766 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 9 10:11:44.826830 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 9 10:11:44.826894 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 9 10:11:44.826962 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 9 10:11:44.827040 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 9 10:11:44.827099 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 9 10:11:44.827156 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 9 10:11:44.827166 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 9 10:11:44.827174 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 9 10:11:44.827181 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 9 10:11:44.827190 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 9 10:11:44.827198 kernel: iommu: Default domain type: Translated
Jul 9 10:11:44.827205 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 9 10:11:44.827212 kernel: efivars: Registered efivars operations
Jul 9 10:11:44.827219 kernel: vgaarb: loaded
Jul 9 10:11:44.827226 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 9 10:11:44.827233 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 10:11:44.827240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 10:11:44.827248 kernel: pnp: PnP ACPI init
Jul 9 10:11:44.827323 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 9 10:11:44.827334 kernel: pnp: PnP ACPI: found 1 devices
Jul 9 10:11:44.827342 kernel: NET: Registered PF_INET protocol family
Jul 9 10:11:44.827349 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 10:11:44.827356 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 10:11:44.827363 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 10:11:44.827370 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 10:11:44.827378 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 10:11:44.827387 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 10:11:44.827394 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 10:11:44.827401 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 10:11:44.827408 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 10:11:44.827415 kernel: PCI: CLS 0 bytes, default 64
Jul 9 10:11:44.827422 kernel: kvm [1]: HYP mode not available
Jul 9 10:11:44.827429 kernel: Initialise system trusted keyrings
Jul 9 10:11:44.827436 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 10:11:44.827443 kernel: Key type asymmetric registered
Jul 9 10:11:44.827452 kernel: Asymmetric key parser 'x509' registered
Jul 9 10:11:44.827459 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 9 10:11:44.827466 kernel: io scheduler mq-deadline registered
Jul 9 10:11:44.827473 kernel: io scheduler kyber registered
Jul 9 10:11:44.827481 kernel: io scheduler bfq registered
Jul 9 10:11:44.827488 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 9 10:11:44.827495 kernel: ACPI: button: Power Button [PWRB]
Jul 9 10:11:44.827502 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 9 10:11:44.827566 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 9 10:11:44.827577 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 10:11:44.827584 kernel: thunder_xcv, ver 1.0
Jul 9 10:11:44.827591 kernel: thunder_bgx, ver 1.0
Jul 9 10:11:44.827598 kernel: nicpf, ver 1.0
Jul 9 10:11:44.827605 kernel: nicvf, ver 1.0
Jul 9 10:11:44.827673 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 9 10:11:44.827745 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T10:11:44 UTC (1752055904)
Jul 9 10:11:44.827755 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 9 10:11:44.827764 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 9 10:11:44.827771 kernel: watchdog: NMI not fully supported
Jul 9 10:11:44.827779 kernel: watchdog: Hard watchdog permanently disabled
Jul 9 10:11:44.827786 kernel: NET: Registered PF_INET6 protocol family
Jul 9 10:11:44.827792 kernel: Segment Routing with IPv6
Jul 9 10:11:44.827799 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 10:11:44.827806 kernel: NET: Registered PF_PACKET protocol family
Jul 9 10:11:44.827813 kernel: Key type dns_resolver registered
Jul 9 10:11:44.827820 kernel: registered taskstats version 1
Jul 9 10:11:44.827827 kernel: Loading compiled-in X.509 certificates
Jul 9 10:11:44.827836 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 3af455426f266805bd3cf61871c72c3a0bf9894a'
Jul 9 10:11:44.827843 kernel: Demotion targets for Node 0: null
Jul 9 10:11:44.827850 kernel: Key type .fscrypt registered
Jul 9 10:11:44.827857 kernel: Key type fscrypt-provisioning registered
Jul 9 10:11:44.827864 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 10:11:44.827871 kernel: ima: Allocated hash algorithm: sha1
Jul 9 10:11:44.827879 kernel: ima: No architecture policies found
Jul 9 10:11:44.827886 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 9 10:11:44.827894 kernel: clk: Disabling unused clocks
Jul 9 10:11:44.827902 kernel: PM: genpd: Disabling unused power domains
Jul 9 10:11:44.827909 kernel: Warning: unable to open an initial console.
Jul 9 10:11:44.827916 kernel: Freeing unused kernel memory: 39424K
Jul 9 10:11:44.827923 kernel: Run /init as init process
Jul 9 10:11:44.827930 kernel: with arguments:
Jul 9 10:11:44.827938 kernel: /init
Jul 9 10:11:44.827945 kernel: with environment:
Jul 9 10:11:44.827951 kernel: HOME=/
Jul 9 10:11:44.827960 kernel: TERM=linux
Jul 9 10:11:44.827967 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 10:11:44.827975 systemd[1]: Successfully made /usr/ read-only.
Jul 9 10:11:44.827986 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 10:11:44.827994 systemd[1]: Detected virtualization kvm.
Jul 9 10:11:44.828001 systemd[1]: Detected architecture arm64.
Jul 9 10:11:44.828015 systemd[1]: Running in initrd.
Jul 9 10:11:44.828024 systemd[1]: No hostname configured, using default hostname.
Jul 9 10:11:44.828034 systemd[1]: Hostname set to .
Jul 9 10:11:44.828042 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 10:11:44.828052 systemd[1]: Queued start job for default target initrd.target.
Jul 9 10:11:44.828060 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 10:11:44.828068 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 10:11:44.828076 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 10:11:44.828084 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 10:11:44.828091 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 10:11:44.828102 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 10:11:44.828110 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 10:11:44.828117 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 10:11:44.828125 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 10:11:44.828132 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 10:11:44.828140 systemd[1]: Reached target paths.target - Path Units.
Jul 9 10:11:44.828148 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 10:11:44.828156 systemd[1]: Reached target swap.target - Swaps.
Jul 9 10:11:44.828163 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 10:11:44.828171 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 10:11:44.828179 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 10:11:44.828186 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 10:11:44.828194 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 10:11:44.828201 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 10:11:44.828209 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 10:11:44.828218 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 10:11:44.828231 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 10:11:44.828238 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 10:11:44.828246 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 10:11:44.828253 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 10:11:44.828262 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 10:11:44.828270 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 10:11:44.828278 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 10:11:44.828286 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 10:11:44.828295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 10:11:44.828304 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 10:11:44.828312 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 10:11:44.828320 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 10:11:44.828329 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 10:11:44.828356 systemd-journald[244]: Collecting audit messages is disabled.
Jul 9 10:11:44.828375 systemd-journald[244]: Journal started
Jul 9 10:11:44.828395 systemd-journald[244]: Runtime Journal (/run/log/journal/df5bc630858c4511bd4e35f825adcea5) is 6M, max 48.5M, 42.4M free.
Jul 9 10:11:44.825110 systemd-modules-load[245]: Inserted module 'overlay'
Jul 9 10:11:44.835019 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 10:11:44.838525 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 10:11:44.839365 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 10:11:44.843555 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 10:11:44.842783 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 10:11:44.845233 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 10:11:44.848578 systemd-modules-load[245]: Inserted module 'br_netfilter'
Jul 9 10:11:44.849556 kernel: Bridge firewalling registered
Jul 9 10:11:44.856792 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 10:11:44.858348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 10:11:44.861458 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 10:11:44.865823 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 10:11:44.866951 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 10:11:44.870191 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 10:11:44.873279 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 10:11:44.874845 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 10:11:44.879964 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 10:11:44.882580 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 10:11:44.909224 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74a33b1d464884e3b2573e51f747b6939e1912812116b4748b2b08804b5b74c1
Jul 9 10:11:44.923843 systemd-resolved[287]: Positive Trust Anchors:
Jul 9 10:11:44.923861 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 10:11:44.923893 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 10:11:44.929795 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jul 9 10:11:44.930775 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 10:11:44.934083 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 10:11:44.989745 kernel: SCSI subsystem initialized
Jul 9 10:11:44.994725 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 10:11:45.001750 kernel: iscsi: registered transport (tcp)
Jul 9 10:11:45.016740 kernel: iscsi: registered transport (qla4xxx)
Jul 9 10:11:45.016765 kernel: QLogic iSCSI HBA Driver
Jul 9 10:11:45.032056 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 10:11:45.045685 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 10:11:45.049127 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 10:11:45.091209 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 10:11:45.093521 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 10:11:45.161737 kernel: raid6: neonx8 gen() 15782 MB/s
Jul 9 10:11:45.178743 kernel: raid6: neonx4 gen() 15786 MB/s
Jul 9 10:11:45.196747 kernel: raid6: neonx2 gen() 13164 MB/s
Jul 9 10:11:45.212735 kernel: raid6: neonx1 gen() 10431 MB/s
Jul 9 10:11:45.229741 kernel: raid6: int64x8 gen() 6878 MB/s
Jul 9 10:11:45.246733 kernel: raid6: int64x4 gen() 7321 MB/s
Jul 9 10:11:45.263732 kernel: raid6: int64x2 gen() 6074 MB/s
Jul 9 10:11:45.280833 kernel: raid6: int64x1 gen() 5043 MB/s
Jul 9 10:11:45.280847 kernel: raid6: using algorithm neonx4 gen() 15786 MB/s
Jul 9 10:11:45.298852 kernel: raid6: .... xor() 12275 MB/s, rmw enabled
Jul 9 10:11:45.298873 kernel: raid6: using neon recovery algorithm
Jul 9 10:11:45.306168 kernel: xor: measuring software checksum speed
Jul 9 10:11:45.306185 kernel: 8regs : 21579 MB/sec
Jul 9 10:11:45.306830 kernel: 32regs : 21681 MB/sec
Jul 9 10:11:45.308082 kernel: arm64_neon : 27860 MB/sec
Jul 9 10:11:45.308094 kernel: xor: using function: arm64_neon (27860 MB/sec)
Jul 9 10:11:45.361739 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 10:11:45.369738 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 10:11:45.372315 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 10:11:45.397324 systemd-udevd[495]: Using default interface naming scheme 'v255'.
Jul 9 10:11:45.401861 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 10:11:45.404599 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 10:11:45.433941 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Jul 9 10:11:45.454940 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 10:11:45.457216 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 10:11:45.509756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 10:11:45.512368 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 10:11:45.559676 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 9 10:11:45.559839 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 9 10:11:45.562757 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 9 10:11:45.562790 kernel: GPT:9289727 != 19775487
Jul 9 10:11:45.562799 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 9 10:11:45.564041 kernel: GPT:9289727 != 19775487
Jul 9 10:11:45.564070 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 9 10:11:45.564756 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 10:11:45.566937 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 10:11:45.565023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 10:11:45.569029 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 10:11:45.571129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 10:11:45.592205 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 9 10:11:45.593656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 10:11:45.600466 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 9 10:11:45.612889 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 9 10:11:45.624380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 10:11:45.630488 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 9 10:11:45.631718 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 9 10:11:45.634023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 10:11:45.636838 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 10:11:45.638853 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 10:11:45.641451 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 9 10:11:45.643240 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 9 10:11:45.659964 disk-uuid[588]: Primary Header is updated.
Jul 9 10:11:45.659964 disk-uuid[588]: Secondary Entries is updated.
Jul 9 10:11:45.659964 disk-uuid[588]: Secondary Header is updated.
Jul 9 10:11:45.664730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 10:11:45.664910 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 10:11:46.670739 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 10:11:46.671406 disk-uuid[592]: The operation has completed successfully.
Jul 9 10:11:46.699968 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 9 10:11:46.700763 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 9 10:11:46.721530 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 9 10:11:46.744066 sh[612]: Success
Jul 9 10:11:46.758730 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 9 10:11:46.758773 kernel: device-mapper: uevent: version 1.0.3
Jul 9 10:11:46.760486 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 9 10:11:46.770726 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 9 10:11:46.796563 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 9 10:11:46.799416 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 9 10:11:46.813376 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 9 10:11:46.820732 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 9 10:11:46.823741 kernel: BTRFS: device fsid b890ad05-381e-41d5-a872-05bd1f9d6a23 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (624)
Jul 9 10:11:46.823774 kernel: BTRFS info (device dm-0): first mount of filesystem b890ad05-381e-41d5-a872-05bd1f9d6a23
Jul 9 10:11:46.823785 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 9 10:11:46.825312 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 9 10:11:46.828567 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 9 10:11:46.829860 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 10:11:46.831363 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 9 10:11:46.832105 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 9 10:11:46.833720 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 9 10:11:46.854287 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653)
Jul 9 10:11:46.854329 kernel: BTRFS info (device vda6): first mount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5
Jul 9 10:11:46.855457 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 10:11:46.855484 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 10:11:46.861734 kernel: BTRFS info (device vda6): last unmount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5
Jul 9 10:11:46.862329 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 9 10:11:46.864236 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 9 10:11:46.931818 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 10:11:46.934852 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 10:11:46.976104 systemd-networkd[798]: lo: Link UP
Jul 9 10:11:46.976115 systemd-networkd[798]: lo: Gained carrier
Jul 9 10:11:46.976899 systemd-networkd[798]: Enumeration completed
Jul 9 10:11:46.977179 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 10:11:46.977396 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 10:11:46.977400 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 10:11:46.978136 systemd-networkd[798]: eth0: Link UP
Jul 9 10:11:46.978138 systemd-networkd[798]: eth0: Gained carrier
Jul 9 10:11:46.978146 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 10:11:46.979133 systemd[1]: Reached target network.target - Network.
Jul 9 10:11:47.001759 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 9 10:11:47.015853 ignition[700]: Ignition 2.21.0
Jul 9 10:11:47.015865 ignition[700]: Stage: fetch-offline
Jul 9 10:11:47.015899 ignition[700]: no configs at "/usr/lib/ignition/base.d"
Jul 9 10:11:47.015906 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:11:47.016088 ignition[700]: parsed url from cmdline: ""
Jul 9 10:11:47.016091 ignition[700]: no config URL provided
Jul 9 10:11:47.016095 ignition[700]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 10:11:47.016102 ignition[700]: no config at "/usr/lib/ignition/user.ign"
Jul 9 10:11:47.016121 ignition[700]: op(1): [started] loading QEMU firmware config module
Jul 9 10:11:47.016125 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 9 10:11:47.022723 ignition[700]: op(1): [finished] loading QEMU firmware config module
Jul 9 10:11:47.022741 ignition[700]: QEMU firmware config was not found. Ignoring...
Jul 9 10:11:47.061810 ignition[700]: parsing config with SHA512: ed1cef318da40fd57e28a59421b90789cab2a25a8c00fbd4b424c7b5d3cb623c5b1a4743ae3051c718bc1ee4f9b24d48b16ff06e2df4f56e28ad757381e3238b
Jul 9 10:11:47.065663 unknown[700]: fetched base config from "system"
Jul 9 10:11:47.065675 unknown[700]: fetched user config from "qemu"
Jul 9 10:11:47.066084 ignition[700]: fetch-offline: fetch-offline passed
Jul 9 10:11:47.067607 systemd-resolved[287]: Detected conflict on linux IN A 10.0.0.141
Jul 9 10:11:47.066134 ignition[700]: Ignition finished successfully
Jul 9 10:11:47.067615 systemd-resolved[287]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Jul 9 10:11:47.067905 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 10:11:47.069825 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 9 10:11:47.070611 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 9 10:11:47.096296 ignition[812]: Ignition 2.21.0
Jul 9 10:11:47.096315 ignition[812]: Stage: kargs
Jul 9 10:11:47.096474 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Jul 9 10:11:47.096483 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:11:47.098727 ignition[812]: kargs: kargs passed
Jul 9 10:11:47.098780 ignition[812]: Ignition finished successfully
Jul 9 10:11:47.103783 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 9 10:11:47.106581 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 9 10:11:47.133905 ignition[821]: Ignition 2.21.0
Jul 9 10:11:47.133917 ignition[821]: Stage: disks
Jul 9 10:11:47.134071 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Jul 9 10:11:47.134079 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:11:47.138138 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 9 10:11:47.135495 ignition[821]: disks: disks passed
Jul 9 10:11:47.139443 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 9 10:11:47.135542 ignition[821]: Ignition finished successfully
Jul 9 10:11:47.141167 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 9 10:11:47.142795 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 10:11:47.144584 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 10:11:47.146145 systemd[1]: Reached target basic.target - Basic System.
Jul 9 10:11:47.148824 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 9 10:11:47.183421 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 9 10:11:47.188256 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 9 10:11:47.192651 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 9 10:11:47.262061 kernel: EXT4-fs (vda9): mounted filesystem 83f4d40b-59ad-4dad-9ca3-9ab67909ff35 r/w with ordered data mode. Quota mode: none.
Jul 9 10:11:47.262570 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 9 10:11:47.263838 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 9 10:11:47.266181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 10:11:47.268389 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 9 10:11:47.269405 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 9 10:11:47.269446 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 9 10:11:47.269481 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 10:11:47.287947 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 9 10:11:47.290456 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 9 10:11:47.293726 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (839)
Jul 9 10:11:47.295966 kernel: BTRFS info (device vda6): first mount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5
Jul 9 10:11:47.295991 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 10:11:47.296008 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 10:11:47.299628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 10:11:47.336679 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Jul 9 10:11:47.340636 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory
Jul 9 10:11:47.344739 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Jul 9 10:11:47.348345 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 9 10:11:47.417361 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 9 10:11:47.419358 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 9 10:11:47.420908 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 9 10:11:47.438982 kernel: BTRFS info (device vda6): last unmount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5
Jul 9 10:11:47.453544 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 9 10:11:47.455411 ignition[955]: INFO : Ignition 2.21.0
Jul 9 10:11:47.455411 ignition[955]: INFO : Stage: mount
Jul 9 10:11:47.457482 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 10:11:47.457482 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:11:47.459577 ignition[955]: INFO : mount: mount passed
Jul 9 10:11:47.459577 ignition[955]: INFO : Ignition finished successfully
Jul 9 10:11:47.460264 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 9 10:11:47.462091 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 9 10:11:47.820962 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 9 10:11:47.822415 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 10:11:47.847630 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (967)
Jul 9 10:11:47.847667 kernel: BTRFS info (device vda6): first mount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5
Jul 9 10:11:47.847677 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 10:11:47.848582 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 10:11:47.851630 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 10:11:47.885009 ignition[985]: INFO : Ignition 2.21.0
Jul 9 10:11:47.885009 ignition[985]: INFO : Stage: files
Jul 9 10:11:47.886650 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 10:11:47.886650 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:11:47.888871 ignition[985]: DEBUG : files: compiled without relabeling support, skipping
Jul 9 10:11:47.888871 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 9 10:11:47.888871 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 9 10:11:47.892762 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 9 10:11:47.892762 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 9 10:11:47.892762 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 9 10:11:47.892131 unknown[985]: wrote ssh authorized keys file for user: core
Jul 9 10:11:47.897791 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 9 10:11:47.897791 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 9 10:11:47.941216 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 10:11:48.055797 systemd-networkd[798]: eth0: Gained IPv6LL
Jul 9 10:11:48.065117 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 9 10:11:48.067032 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 10:11:48.067032 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 10:11:48.067032 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 10:11:48.067032 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 10:11:48.067032 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 10:11:48.067032 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 10:11:48.067032 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 10:11:48.067032 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 10:11:48.080151 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 10:11:48.080151 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 10:11:48.080151 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 9 10:11:48.080151 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 9 10:11:48.080151 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 9 10:11:48.080151 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 9 10:11:48.584812 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 9 10:11:48.785566 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 9 10:11:48.785566 ignition[985]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 9 10:11:48.789264 ignition[985]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 10:11:48.791141 ignition[985]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 10:11:48.791141 ignition[985]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 9 10:11:48.791141 ignition[985]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 9 10:11:48.791141 ignition[985]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 9 10:11:48.791141 ignition[985]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 9 10:11:48.791141 ignition[985]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 9 10:11:48.791141 ignition[985]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 9 10:11:48.816286 ignition[985]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 9 10:11:48.820414 ignition[985]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 9 10:11:48.822890 ignition[985]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 9 10:11:48.822890 ignition[985]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 9 10:11:48.822890 ignition[985]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 9 10:11:48.822890 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 10:11:48.822890 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 10:11:48.822890 ignition[985]: INFO : files: files passed
Jul 9 10:11:48.822890 ignition[985]: INFO : Ignition finished successfully
Jul 9 10:11:48.823523 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 9 10:11:48.826702 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 10:11:48.828830 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 9 10:11:48.844011 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 9 10:11:48.844119 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 10:11:48.847525 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 9 10:11:48.849007 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 10:11:48.849007 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 10:11:48.852327 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 10:11:48.850744 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 10:11:48.853860 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 9 10:11:48.858099 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 9 10:11:48.891821 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 9 10:11:48.891940 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 9 10:11:48.894266 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 10:11:48.896234 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 10:11:48.898214 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 10:11:48.899034 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 9 10:11:48.922654 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 10:11:48.925100 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 9 10:11:48.941515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 9 10:11:48.942882 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 10:11:48.944964 systemd[1]: Stopped target timers.target - Timer Units.
Jul 9 10:11:48.946761 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 9 10:11:48.946891 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 10:11:48.949401 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 9 10:11:48.951470 systemd[1]: Stopped target basic.target - Basic System.
Jul 9 10:11:48.953145 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 9 10:11:48.954828 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 10:11:48.956819 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 9 10:11:48.958877 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 10:11:48.960887 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 9 10:11:48.962787 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 10:11:48.964881 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 9 10:11:48.966962 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 9 10:11:48.968702 systemd[1]: Stopped target swap.target - Swaps.
Jul 9 10:11:48.970247 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 9 10:11:48.970375 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 10:11:48.972693 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 9 10:11:48.974752 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 10:11:48.976734 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 9 10:11:48.976878 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 10:11:48.978947 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 9 10:11:48.979069 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 9 10:11:48.981828 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 9 10:11:48.981934 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 10:11:48.983919 systemd[1]: Stopped target paths.target - Path Units.
Jul 9 10:11:48.985476 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 9 10:11:48.985655 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 10:11:48.987591 systemd[1]: Stopped target slices.target - Slice Units.
Jul 9 10:11:48.989419 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 9 10:11:48.991060 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 9 10:11:48.991146 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 10:11:48.992855 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 9 10:11:48.992930 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 10:11:48.995115 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 9 10:11:48.995237 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 10:11:48.997029 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 9 10:11:48.997128 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 9 10:11:48.999588 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 9 10:11:49.004057 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 9 10:11:49.004184 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 10:11:49.007345 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 9 10:11:49.009078 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 9 10:11:49.009202 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 10:11:49.011412 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 9 10:11:49.011512 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 10:11:49.018557 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 9 10:11:49.018641 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 9 10:11:49.023980 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 9 10:11:49.030072 ignition[1039]: INFO : Ignition 2.21.0
Jul 9 10:11:49.030072 ignition[1039]: INFO : Stage: umount
Jul 9 10:11:49.031729 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 10:11:49.031729 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:11:49.031729 ignition[1039]: INFO : umount: umount passed
Jul 9 10:11:49.031729 ignition[1039]: INFO : Ignition finished successfully
Jul 9 10:11:49.032545 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 9 10:11:49.032672 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 9 10:11:49.033913 systemd[1]: Stopped target network.target - Network.
Jul 9 10:11:49.035791 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 9 10:11:49.035857 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 9 10:11:49.037581 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 9 10:11:49.037628 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 9 10:11:49.039315 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 9 10:11:49.039365 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 9 10:11:49.041253 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 9 10:11:49.041293 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 9 10:11:49.043428 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 9 10:11:49.045285 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 10:11:49.053062 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 10:11:49.053160 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 10:11:49.056372 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 10:11:49.056577 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 10:11:49.056609 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 10:11:49.059519 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 10:11:49.062436 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 10:11:49.062542 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 10:11:49.065837 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 10:11:49.066034 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 9 10:11:49.067551 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 10:11:49.067583 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 10:11:49.070643 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 10:11:49.071627 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 10:11:49.071686 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 10:11:49.073975 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 10:11:49.074030 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 10:11:49.081639 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 10:11:49.081684 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jul 9 10:11:49.083846 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 10:11:49.088333 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 10:11:49.091117 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 10:11:49.091227 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 10:11:49.095810 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 10:11:49.095891 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 10:11:49.099858 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 10:11:49.104863 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 10:11:49.106561 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 10:11:49.106596 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 10:11:49.108555 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 10:11:49.108586 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 10:11:49.110361 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 10:11:49.110417 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 10:11:49.113151 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 9 10:11:49.113208 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 10:11:49.115770 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 10:11:49.115827 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 10:11:49.119582 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 10:11:49.120803 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jul 9 10:11:49.120885 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 10:11:49.124293 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 9 10:11:49.124338 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 10:11:49.127820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 10:11:49.127873 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 10:11:49.132137 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 10:11:49.133853 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 10:11:49.139883 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 10:11:49.140004 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 10:11:49.142464 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 10:11:49.145278 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 10:11:49.164763 systemd[1]: Switching root. Jul 9 10:11:49.189072 systemd-journald[244]: Journal stopped Jul 9 10:11:49.954675 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Jul 9 10:11:49.954748 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 10:11:49.954761 kernel: SELinux: policy capability open_perms=1 Jul 9 10:11:49.954774 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 10:11:49.954783 kernel: SELinux: policy capability always_check_network=0 Jul 9 10:11:49.954794 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 10:11:49.954805 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 10:11:49.954813 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 10:11:49.954822 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 10:11:49.954831 kernel: SELinux: policy capability userspace_initial_context=0 Jul 9 10:11:49.954840 kernel: audit: type=1403 audit(1752055909.379:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 10:11:49.954855 systemd[1]: Successfully loaded SELinux policy in 61.010ms. Jul 9 10:11:49.954871 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.043ms. Jul 9 10:11:49.954881 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 10:11:49.954892 systemd[1]: Detected virtualization kvm. Jul 9 10:11:49.954901 systemd[1]: Detected architecture arm64. Jul 9 10:11:49.954911 systemd[1]: Detected first boot. Jul 9 10:11:49.954921 systemd[1]: Initializing machine ID from VM UUID. Jul 9 10:11:49.954931 zram_generator::config[1084]: No configuration found. Jul 9 10:11:49.954941 kernel: NET: Registered PF_VSOCK protocol family Jul 9 10:11:49.954951 systemd[1]: Populated /etc with preset unit settings. Jul 9 10:11:49.954962 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jul 9 10:11:49.954972 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 10:11:49.954982 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 10:11:49.954998 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 10:11:49.955009 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 10:11:49.955019 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 10:11:49.955029 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 10:11:49.955040 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 9 10:11:49.955050 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 10:11:49.955060 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 10:11:49.955071 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 10:11:49.955080 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 10:11:49.955090 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 10:11:49.955141 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 10:11:49.955157 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 10:11:49.955174 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 10:11:49.955187 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 10:11:49.955198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 10:11:49.955209 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jul 9 10:11:49.955219 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 10:11:49.955230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 10:11:49.955241 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 10:11:49.955251 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 10:11:49.955262 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 10:11:49.955272 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 10:11:49.955282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 10:11:49.955292 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 10:11:49.955302 systemd[1]: Reached target slices.target - Slice Units. Jul 9 10:11:49.955312 systemd[1]: Reached target swap.target - Swaps. Jul 9 10:11:49.955322 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 10:11:49.955332 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 10:11:49.955342 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 10:11:49.955352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 10:11:49.955364 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 10:11:49.955374 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 10:11:49.955389 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 10:11:49.955400 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 10:11:49.955412 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 10:11:49.955422 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 10:11:49.955432 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jul 9 10:11:49.955441 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 10:11:49.955451 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 10:11:49.955462 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 10:11:49.955473 systemd[1]: Reached target machines.target - Containers. Jul 9 10:11:49.955483 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 10:11:49.955493 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 10:11:49.955503 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 10:11:49.955513 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 10:11:49.955523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 10:11:49.955533 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 10:11:49.955544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 10:11:49.955555 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 10:11:49.955565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 10:11:49.955575 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 10:11:49.955585 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 10:11:49.955596 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 10:11:49.955606 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 10:11:49.955616 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 9 10:11:49.955628 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 10:11:49.955638 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 10:11:49.955648 kernel: loop: module loaded Jul 9 10:11:49.955658 kernel: ACPI: bus type drm_connector registered Jul 9 10:11:49.955667 kernel: fuse: init (API version 7.41) Jul 9 10:11:49.955679 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 10:11:49.955689 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 10:11:49.955699 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 10:11:49.955720 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 10:11:49.955733 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 10:11:49.955743 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 10:11:49.955753 systemd[1]: Stopped verity-setup.service. Jul 9 10:11:49.955826 systemd-journald[1152]: Collecting audit messages is disabled. Jul 9 10:11:49.955856 systemd-journald[1152]: Journal started Jul 9 10:11:49.955877 systemd-journald[1152]: Runtime Journal (/run/log/journal/df5bc630858c4511bd4e35f825adcea5) is 6M, max 48.5M, 42.4M free. Jul 9 10:11:49.969790 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 9 10:11:49.969845 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 10:11:49.969858 systemd[1]: Mounted media.mount - External Media Directory. Jul 9 10:11:49.969876 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 10:11:49.969889 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jul 9 10:11:49.969900 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 10:11:49.738367 systemd[1]: Queued start job for default target multi-user.target. Jul 9 10:11:49.757844 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 9 10:11:49.758246 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 9 10:11:49.972198 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 10:11:49.974294 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 10:11:49.975183 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 10:11:49.976677 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 10:11:49.976879 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 10:11:49.978329 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 10:11:49.978488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 10:11:49.979876 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 10:11:49.980050 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 10:11:49.981393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 10:11:49.981550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 10:11:49.983138 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 10:11:49.983284 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 10:11:49.984665 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 10:11:49.984843 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 10:11:49.986276 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 10:11:49.987744 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 9 10:11:49.989273 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 10:11:49.991976 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 10:11:50.003273 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 10:11:50.005636 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 10:11:50.007743 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 10:11:50.008961 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 10:11:50.009008 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 10:11:50.010829 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 10:11:50.016150 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 9 10:11:50.017282 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 10:11:50.018513 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 10:11:50.020550 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 10:11:50.021847 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 10:11:50.025848 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 10:11:50.027187 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 10:11:50.028438 systemd-journald[1152]: Time spent on flushing to /var/log/journal/df5bc630858c4511bd4e35f825adcea5 is 27.420ms for 882 entries. 
Jul 9 10:11:50.028438 systemd-journald[1152]: System Journal (/var/log/journal/df5bc630858c4511bd4e35f825adcea5) is 8M, max 195.6M, 187.6M free. Jul 9 10:11:50.074531 systemd-journald[1152]: Received client request to flush runtime journal. Jul 9 10:11:50.074579 kernel: loop0: detected capacity change from 0 to 211168 Jul 9 10:11:50.028948 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 10:11:50.031928 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 10:11:50.035765 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 10:11:50.043146 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 10:11:50.044595 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 9 10:11:50.047824 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 10:11:50.050170 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 9 10:11:50.054417 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 10:11:50.059246 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 10:11:50.061342 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 10:11:50.076119 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 10:11:50.082824 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 10:11:50.089461 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 9 10:11:50.094953 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 10:11:50.097846 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 10:11:50.098677 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jul 9 10:11:50.106742 kernel: loop1: detected capacity change from 0 to 105936 Jul 9 10:11:50.128740 kernel: loop2: detected capacity change from 0 to 134232 Jul 9 10:11:50.140900 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jul 9 10:11:50.140917 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jul 9 10:11:50.144655 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 10:11:50.165735 kernel: loop3: detected capacity change from 0 to 211168 Jul 9 10:11:50.174739 kernel: loop4: detected capacity change from 0 to 105936 Jul 9 10:11:50.180727 kernel: loop5: detected capacity change from 0 to 134232 Jul 9 10:11:50.186364 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 9 10:11:50.186849 (sd-merge)[1225]: Merged extensions into '/usr'. Jul 9 10:11:50.193188 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 10:11:50.193207 systemd[1]: Reloading... Jul 9 10:11:50.245254 zram_generator::config[1247]: No configuration found. Jul 9 10:11:50.328025 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 9 10:11:50.330959 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 10:11:50.393738 systemd[1]: Reloading finished in 200 ms. Jul 9 10:11:50.425796 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 9 10:11:50.428490 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 10:11:50.443071 systemd[1]: Starting ensure-sysext.service... Jul 9 10:11:50.444928 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 9 10:11:50.460006 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 10:11:50.460037 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 9 10:11:50.460293 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 10:11:50.460488 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 10:11:50.461168 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 10:11:50.461387 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 9 10:11:50.461428 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 9 10:11:50.464340 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 10:11:50.464354 systemd-tmpfiles[1286]: Skipping /boot Jul 9 10:11:50.470542 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 10:11:50.470561 systemd-tmpfiles[1286]: Skipping /boot Jul 9 10:11:50.471944 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Jul 9 10:11:50.471964 systemd[1]: Reloading... Jul 9 10:11:50.518791 zram_generator::config[1310]: No configuration found. Jul 9 10:11:50.591947 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 10:11:50.655058 systemd[1]: Reloading finished in 182 ms. Jul 9 10:11:50.679585 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 9 10:11:50.685421 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 9 10:11:50.695895 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 10:11:50.698338 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 9 10:11:50.700802 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 9 10:11:50.703789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 10:11:50.708006 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 10:11:50.712842 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 9 10:11:50.721520 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 10:11:50.724941 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 10:11:50.727123 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 10:11:50.740036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 10:11:50.741300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 10:11:50.741413 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 10:11:50.742897 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 10:11:50.746403 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 9 10:11:50.748375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 10:11:50.748526 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 10:11:50.751282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 9 10:11:50.758903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 10:11:50.759736 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jul 9 10:11:50.762179 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 10:11:50.762337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 10:11:50.766639 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 9 10:11:50.773510 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 10:11:50.774995 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 10:11:50.778890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 10:11:50.783856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 10:11:50.784974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 10:11:50.785105 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 10:11:50.786435 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 9 10:11:50.787971 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 10:11:50.788713 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 10:11:50.792896 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 9 10:11:50.804043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 9 10:11:50.806650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 10:11:50.810974 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 10:11:50.815583 augenrules[1398]: No rules Jul 9 10:11:50.824538 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 10:11:50.824786 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 10:11:50.827508 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 10:11:50.827667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 10:11:50.834397 systemd[1]: Finished ensure-sysext.service. Jul 9 10:11:50.841449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 10:11:50.841686 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 10:11:50.859375 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 9 10:11:50.875360 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 9 10:11:50.887372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 10:11:50.888508 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 10:11:50.890896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 10:11:50.890949 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 10:11:50.892946 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 10:11:50.894907 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 9 10:11:50.895024 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 10:11:50.898891 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 9 10:11:50.900194 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 10:11:50.909370 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 10:11:50.914076 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 10:11:50.925157 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 10:11:50.926726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 10:11:50.942047 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 10:11:50.982411 systemd-resolved[1352]: Positive Trust Anchors: Jul 9 10:11:50.982430 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 10:11:50.982464 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 10:11:50.983128 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 9 10:11:50.984649 systemd[1]: Reached target time-set.target - System Time Set. 
Jul 9 10:11:50.994731 systemd-resolved[1352]: Defaulting to hostname 'linux'. Jul 9 10:11:51.001072 systemd-networkd[1439]: lo: Link UP Jul 9 10:11:51.001084 systemd-networkd[1439]: lo: Gained carrier Jul 9 10:11:51.002016 systemd-networkd[1439]: Enumeration completed Jul 9 10:11:51.002116 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 10:11:51.002414 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 10:11:51.002418 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 10:11:51.003599 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 10:11:51.005576 systemd[1]: Reached target network.target - Network. Jul 9 10:11:51.005601 systemd-networkd[1439]: eth0: Link UP Jul 9 10:11:51.006582 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 10:11:51.007837 systemd-networkd[1439]: eth0: Gained carrier Jul 9 10:11:51.007855 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 10:11:51.008420 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 10:11:51.010103 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 9 10:11:51.011892 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 10:11:51.013468 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 10:11:51.016882 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 10:11:51.018160 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jul 9 10:11:51.019429 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 10:11:51.019460 systemd[1]: Reached target paths.target - Path Units. Jul 9 10:11:51.020458 systemd[1]: Reached target timers.target - Timer Units. Jul 9 10:11:51.022347 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 10:11:51.024677 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 10:11:51.029009 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 10:11:51.031963 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 10:11:51.033482 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 10:11:51.042413 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 10:11:51.044273 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 9 10:11:51.047769 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 10:11:51.052835 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 9 10:11:51.054905 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 10:11:51.056978 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 10:11:51.062087 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 10:11:51.063302 systemd[1]: Reached target basic.target - Basic System. Jul 9 10:11:51.064005 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection. Jul 9 10:11:51.064478 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 9 10:11:51.064569 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 10:11:51.065143 systemd-timesyncd[1440]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 9 10:11:51.065251 systemd-timesyncd[1440]: Initial clock synchronization to Wed 2025-07-09 10:11:51.092610 UTC. Jul 9 10:11:51.068479 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 10:11:51.070674 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 10:11:51.074032 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 10:11:51.076374 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 10:11:51.081239 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 10:11:51.082361 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 10:11:51.083726 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 10:11:51.084293 jq[1469]: false Jul 9 10:11:51.087929 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 9 10:11:51.090935 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 9 10:11:51.094082 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 9 10:11:51.100125 extend-filesystems[1470]: Found /dev/vda6 Jul 9 10:11:51.102903 extend-filesystems[1470]: Found /dev/vda9 Jul 9 10:11:51.105786 extend-filesystems[1470]: Checking size of /dev/vda9 Jul 9 10:11:51.105817 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 9 10:11:51.108950 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 9 10:11:51.111018 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 10:11:51.111555 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 9 10:11:51.112771 systemd[1]: Starting update-engine.service - Update Engine... Jul 9 10:11:51.115997 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 9 10:11:51.118059 extend-filesystems[1470]: Resized partition /dev/vda9 Jul 9 10:11:51.121018 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 9 10:11:51.122640 extend-filesystems[1495]: resize2fs 1.47.2 (1-Jan-2025) Jul 9 10:11:51.126076 jq[1494]: true Jul 9 10:11:51.126851 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 9 10:11:51.129183 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 9 10:11:51.129467 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 9 10:11:51.129870 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 10:11:51.130121 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 9 10:11:51.135361 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 10:11:51.137877 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 9 10:11:51.135980 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 9 10:11:51.178099 jq[1500]: true Jul 9 10:11:51.178802 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 10:11:51.183446 update_engine[1491]: I20250709 10:11:51.180267 1491 main.cc:92] Flatcar Update Engine starting Jul 9 10:11:51.208744 tar[1499]: linux-arm64/LICENSE Jul 9 10:11:51.208978 tar[1499]: linux-arm64/helm Jul 9 10:11:51.212712 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 9 10:11:51.221570 dbus-daemon[1467]: [system] SELinux support is enabled Jul 9 10:11:51.221803 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 9 10:11:51.226934 update_engine[1491]: I20250709 10:11:51.225741 1491 update_check_scheduler.cc:74] Next update check in 5m38s Jul 9 10:11:51.225852 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 10:11:51.225878 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 10:11:51.227354 extend-filesystems[1495]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 9 10:11:51.227354 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 10:11:51.227354 extend-filesystems[1495]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 9 10:11:51.243271 extend-filesystems[1470]: Resized filesystem in /dev/vda9 Jul 9 10:11:51.227783 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 10:11:51.227802 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 9 10:11:51.231321 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (Power Button) Jul 9 10:11:51.234026 systemd-logind[1485]: New seat seat0. Jul 9 10:11:51.236436 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 10:11:51.236663 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 10:11:51.256563 bash[1532]: Updated "/home/core/.ssh/authorized_keys" Jul 9 10:11:51.256838 systemd[1]: Started systemd-logind.service - User Login Management. Jul 9 10:11:51.259736 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 10:11:51.265213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 10:11:51.267520 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 9 10:11:51.268773 dbus-daemon[1467]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 9 10:11:51.269015 systemd[1]: Started update-engine.service - Update Engine. Jul 9 10:11:51.274018 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 9 10:11:51.336834 locksmithd[1539]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 10:11:51.454208 containerd[1513]: time="2025-07-09T10:11:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 10:11:51.455503 containerd[1513]: time="2025-07-09T10:11:51.455444880Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 9 10:11:51.470739 containerd[1513]: time="2025-07-09T10:11:51.470148200Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.52µs" Jul 9 10:11:51.470739 containerd[1513]: time="2025-07-09T10:11:51.470231680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 10:11:51.470739 containerd[1513]: time="2025-07-09T10:11:51.470254280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 10:11:51.470739 containerd[1513]: time="2025-07-09T10:11:51.470424040Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 10:11:51.470739 containerd[1513]: time="2025-07-09T10:11:51.470444080Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 10:11:51.470739 containerd[1513]: time="2025-07-09T10:11:51.470468680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 10:11:51.470739 containerd[1513]: time="2025-07-09T10:11:51.470523760Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 10:11:51.470739 containerd[1513]: time="2025-07-09T10:11:51.470534520Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 10:11:51.471154 containerd[1513]: time="2025-07-09T10:11:51.471126440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 10:11:51.471280 containerd[1513]: time="2025-07-09T10:11:51.471263040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 10:11:51.471335 containerd[1513]: time="2025-07-09T10:11:51.471321920Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 10:11:51.471427 containerd[1513]: time="2025-07-09T10:11:51.471368080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 10:11:51.471640 containerd[1513]: time="2025-07-09T10:11:51.471619640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 10:11:51.472161 containerd[1513]: time="2025-07-09T10:11:51.472137960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 10:11:51.472306 containerd[1513]: time="2025-07-09T10:11:51.472288040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 10:11:51.472362 containerd[1513]: time="2025-07-09T10:11:51.472350440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 10:11:51.472530 containerd[1513]: time="2025-07-09T10:11:51.472512880Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 10:11:51.473094 containerd[1513]: time="2025-07-09T10:11:51.473064040Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 10:11:51.473286 containerd[1513]: time="2025-07-09T10:11:51.473266120Z" level=info msg="metadata content store policy set" policy=shared Jul 9 10:11:51.477426 containerd[1513]: time="2025-07-09T10:11:51.477394960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 10:11:51.477651 containerd[1513]: time="2025-07-09T10:11:51.477576160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 10:11:51.477651 containerd[1513]: time="2025-07-09T10:11:51.477600960Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 10:11:51.477651 containerd[1513]: time="2025-07-09T10:11:51.477613960Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 10:11:51.477651 containerd[1513]: time="2025-07-09T10:11:51.477626240Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 10:11:51.477949 containerd[1513]: time="2025-07-09T10:11:51.477893200Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 10:11:51.477949 containerd[1513]: time="2025-07-09T10:11:51.477921840Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 9 10:11:51.478115 containerd[1513]: time="2025-07-09T10:11:51.478096600Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 10:11:51.478201 containerd[1513]: time="2025-07-09T10:11:51.478186720Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478317680Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478341440Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478355440Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478484280Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478505960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478523200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478533840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478543880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478554760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478565440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478576880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 9 10:11:51.478867 containerd[1513]: 
time="2025-07-09T10:11:51.478590760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478601680Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478614600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 10:11:51.478867 containerd[1513]: time="2025-07-09T10:11:51.478818480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 10:11:51.479144 containerd[1513]: time="2025-07-09T10:11:51.478834480Z" level=info msg="Start snapshots syncer" Jul 9 10:11:51.479505 containerd[1513]: time="2025-07-09T10:11:51.479475000Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 10:11:51.481367 containerd[1513]: time="2025-07-09T10:11:51.481306400Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 9 10:11:51.481493 containerd[1513]: time="2025-07-09T10:11:51.481384800Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 9 10:11:51.482418 containerd[1513]: time="2025-07-09T10:11:51.482388400Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 9 10:11:51.482664 containerd[1513]: time="2025-07-09T10:11:51.482641720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 9 10:11:51.482903 containerd[1513]: time="2025-07-09T10:11:51.482882400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 9 10:11:51.483026 containerd[1513]: time="2025-07-09T10:11:51.483007320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 9 10:11:51.483089 containerd[1513]: time="2025-07-09T10:11:51.483069920Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 9 10:11:51.483229 containerd[1513]: time="2025-07-09T10:11:51.483213360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 9 10:11:51.483297 containerd[1513]: time="2025-07-09T10:11:51.483283480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 9 10:11:51.483387 containerd[1513]: time="2025-07-09T10:11:51.483372280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 9 10:11:51.483517 containerd[1513]: time="2025-07-09T10:11:51.483495400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 9 10:11:51.483633 containerd[1513]: time="2025-07-09T10:11:51.483614720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 9 10:11:51.483697 containerd[1513]: time="2025-07-09T10:11:51.483685400Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 9 10:11:51.483901 containerd[1513]: time="2025-07-09T10:11:51.483878320Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484028640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484047440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484057880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484067280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484077480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484087920Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484175040Z" level=info msg="runtime interface created" Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484180960Z" level=info msg="created NRI interface" Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484194000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484206680Z" level=info msg="Connect containerd service" Jul 9 10:11:51.484269 containerd[1513]: time="2025-07-09T10:11:51.484236440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 10:11:51.486023 containerd[1513]: 
time="2025-07-09T10:11:51.485981800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 10:11:51.492551 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 10:11:51.503551 tar[1499]: linux-arm64/README.md Jul 9 10:11:51.518874 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 10:11:51.521562 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 10:11:51.525581 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 10:11:51.535857 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 10:11:51.537817 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 10:11:51.541357 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 10:11:51.568814 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 10:11:51.571456 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 10:11:51.573629 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 9 10:11:51.575061 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 9 10:11:51.614247 containerd[1513]: time="2025-07-09T10:11:51.614148000Z" level=info msg="Start subscribing containerd event" Jul 9 10:11:51.614247 containerd[1513]: time="2025-07-09T10:11:51.614243920Z" level=info msg="Start recovering state" Jul 9 10:11:51.614369 containerd[1513]: time="2025-07-09T10:11:51.614334720Z" level=info msg="Start event monitor" Jul 9 10:11:51.614369 containerd[1513]: time="2025-07-09T10:11:51.614351160Z" level=info msg="Start cni network conf syncer for default" Jul 9 10:11:51.614369 containerd[1513]: time="2025-07-09T10:11:51.614359600Z" level=info msg="Start streaming server" Jul 9 10:11:51.614369 containerd[1513]: time="2025-07-09T10:11:51.614368840Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 10:11:51.614463 containerd[1513]: time="2025-07-09T10:11:51.614376040Z" level=info msg="runtime interface starting up..." Jul 9 10:11:51.614463 containerd[1513]: time="2025-07-09T10:11:51.614382080Z" level=info msg="starting plugins..." Jul 9 10:11:51.614463 containerd[1513]: time="2025-07-09T10:11:51.614406640Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 10:11:51.614509 containerd[1513]: time="2025-07-09T10:11:51.614457480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 10:11:51.614509 containerd[1513]: time="2025-07-09T10:11:51.614504280Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 10:11:51.614655 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 10:11:51.615957 containerd[1513]: time="2025-07-09T10:11:51.615923640Z" level=info msg="containerd successfully booted in 0.162163s" Jul 9 10:11:52.343863 systemd-networkd[1439]: eth0: Gained IPv6LL Jul 9 10:11:52.346337 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 10:11:52.348161 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 9 10:11:52.352160 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 10:11:52.354550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:11:52.372126 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 10:11:52.387894 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 10:11:52.388126 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 9 10:11:52.389748 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 10:11:52.394778 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 10:11:52.928016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:11:52.929528 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 10:11:52.932340 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 10:11:52.934818 systemd[1]: Startup finished in 2.133s (kernel) + 4.715s (initrd) + 3.628s (userspace) = 10.478s. Jul 9 10:11:53.371830 kubelet[1608]: E0709 10:11:53.371722 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 10:11:53.373999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 10:11:53.374141 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 10:11:53.374412 systemd[1]: kubelet.service: Consumed 798ms CPU time, 255.6M memory peak. Jul 9 10:11:57.896044 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 9 10:11:57.897039 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:60612.service - OpenSSH per-connection server daemon (10.0.0.1:60612). Jul 9 10:11:57.978411 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 60612 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:11:57.979929 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:11:57.985793 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 10:11:57.986640 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 10:11:57.991595 systemd-logind[1485]: New session 1 of user core. Jul 9 10:11:58.011819 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 10:11:58.014645 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 10:11:58.026613 (systemd)[1627]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 10:11:58.028843 systemd-logind[1485]: New session c1 of user core. Jul 9 10:11:58.130678 systemd[1627]: Queued start job for default target default.target. Jul 9 10:11:58.148615 systemd[1627]: Created slice app.slice - User Application Slice. Jul 9 10:11:58.148644 systemd[1627]: Reached target paths.target - Paths. Jul 9 10:11:58.148680 systemd[1627]: Reached target timers.target - Timers. Jul 9 10:11:58.149847 systemd[1627]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 10:11:58.158557 systemd[1627]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 10:11:58.158614 systemd[1627]: Reached target sockets.target - Sockets. Jul 9 10:11:58.158650 systemd[1627]: Reached target basic.target - Basic System. Jul 9 10:11:58.158677 systemd[1627]: Reached target default.target - Main User Target. Jul 9 10:11:58.158701 systemd[1627]: Startup finished in 123ms. Jul 9 10:11:58.158824 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jul 9 10:11:58.160183 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 9 10:11:58.225493 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:60620.service - OpenSSH per-connection server daemon (10.0.0.1:60620).
Jul 9 10:11:58.280471 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 60620 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:11:58.281564 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:11:58.285781 systemd-logind[1485]: New session 2 of user core.
Jul 9 10:11:58.291852 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 9 10:11:58.341818 sshd[1641]: Connection closed by 10.0.0.1 port 60620
Jul 9 10:11:58.342252 sshd-session[1638]: pam_unix(sshd:session): session closed for user core
Jul 9 10:11:58.352508 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:60620.service: Deactivated successfully.
Jul 9 10:11:58.354921 systemd[1]: session-2.scope: Deactivated successfully.
Jul 9 10:11:58.355514 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit.
Jul 9 10:11:58.357940 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:60628.service - OpenSSH per-connection server daemon (10.0.0.1:60628).
Jul 9 10:11:58.358799 systemd-logind[1485]: Removed session 2.
Jul 9 10:11:58.418549 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 60628 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:11:58.419697 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:11:58.423848 systemd-logind[1485]: New session 3 of user core.
Jul 9 10:11:58.440851 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 9 10:11:58.488312 sshd[1650]: Connection closed by 10.0.0.1 port 60628
Jul 9 10:11:58.488932 sshd-session[1647]: pam_unix(sshd:session): session closed for user core
Jul 9 10:11:58.498558 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:60628.service: Deactivated successfully.
Jul 9 10:11:58.499915 systemd[1]: session-3.scope: Deactivated successfully.
Jul 9 10:11:58.500525 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit.
Jul 9 10:11:58.502756 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:60632.service - OpenSSH per-connection server daemon (10.0.0.1:60632).
Jul 9 10:11:58.503630 systemd-logind[1485]: Removed session 3.
Jul 9 10:11:58.551584 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 60632 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:11:58.552662 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:11:58.556767 systemd-logind[1485]: New session 4 of user core.
Jul 9 10:11:58.568869 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 9 10:11:58.621211 sshd[1659]: Connection closed by 10.0.0.1 port 60632
Jul 9 10:11:58.621648 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Jul 9 10:11:58.638751 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:60632.service: Deactivated successfully.
Jul 9 10:11:58.642041 systemd[1]: session-4.scope: Deactivated successfully.
Jul 9 10:11:58.643493 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit.
Jul 9 10:11:58.644871 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:60648.service - OpenSSH per-connection server daemon (10.0.0.1:60648).
Jul 9 10:11:58.645837 systemd-logind[1485]: Removed session 4.
Jul 9 10:11:58.698000 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 60648 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:11:58.699380 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:11:58.703785 systemd-logind[1485]: New session 5 of user core.
Jul 9 10:11:58.713891 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 9 10:11:58.776495 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 9 10:11:58.776800 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 10:11:58.790524 sudo[1669]: pam_unix(sudo:session): session closed for user root
Jul 9 10:11:58.792389 sshd[1668]: Connection closed by 10.0.0.1 port 60648
Jul 9 10:11:58.792295 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Jul 9 10:11:58.806756 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:60648.service: Deactivated successfully.
Jul 9 10:11:58.808143 systemd[1]: session-5.scope: Deactivated successfully.
Jul 9 10:11:58.808792 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit.
Jul 9 10:11:58.810692 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:60658.service - OpenSSH per-connection server daemon (10.0.0.1:60658).
Jul 9 10:11:58.811912 systemd-logind[1485]: Removed session 5.
Jul 9 10:11:58.871219 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 60658 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:11:58.872364 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:11:58.876849 systemd-logind[1485]: New session 6 of user core.
Jul 9 10:11:58.889863 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 9 10:11:58.940733 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 9 10:11:58.941263 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 10:11:59.018841 sudo[1680]: pam_unix(sudo:session): session closed for user root
Jul 9 10:11:59.024622 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 9 10:11:59.024887 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 10:11:59.032889 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 10:11:59.071369 augenrules[1702]: No rules
Jul 9 10:11:59.072080 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 10:11:59.072281 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 10:11:59.073256 sudo[1679]: pam_unix(sudo:session): session closed for user root
Jul 9 10:11:59.074828 sshd[1678]: Connection closed by 10.0.0.1 port 60658
Jul 9 10:11:59.075270 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
Jul 9 10:11:59.086841 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:60658.service: Deactivated successfully.
Jul 9 10:11:59.089468 systemd[1]: session-6.scope: Deactivated successfully.
Jul 9 10:11:59.090772 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit.
Jul 9 10:11:59.093465 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:60672.service - OpenSSH per-connection server daemon (10.0.0.1:60672).
Jul 9 10:11:59.094475 systemd-logind[1485]: Removed session 6.
Jul 9 10:11:59.158268 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 60672 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:11:59.159466 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:11:59.164069 systemd-logind[1485]: New session 7 of user core.
Jul 9 10:11:59.174902 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 9 10:11:59.225490 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 9 10:11:59.225790 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 10:11:59.585405 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 9 10:11:59.602008 (dockerd)[1735]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 9 10:11:59.887469 dockerd[1735]: time="2025-07-09T10:11:59.887351522Z" level=info msg="Starting up"
Jul 9 10:11:59.890397 dockerd[1735]: time="2025-07-09T10:11:59.890366998Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 9 10:11:59.899397 dockerd[1735]: time="2025-07-09T10:11:59.899368837Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 9 10:11:59.912782 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2606245711-merged.mount: Deactivated successfully.
Jul 9 10:11:59.933984 dockerd[1735]: time="2025-07-09T10:11:59.933943406Z" level=info msg="Loading containers: start."
Jul 9 10:11:59.943703 kernel: Initializing XFRM netlink socket
Jul 9 10:12:00.145694 systemd-networkd[1439]: docker0: Link UP
Jul 9 10:12:00.150971 dockerd[1735]: time="2025-07-09T10:12:00.150928476Z" level=info msg="Loading containers: done."
Jul 9 10:12:00.166485 dockerd[1735]: time="2025-07-09T10:12:00.166438463Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 9 10:12:00.166604 dockerd[1735]: time="2025-07-09T10:12:00.166512393Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 9 10:12:00.166604 dockerd[1735]: time="2025-07-09T10:12:00.166589404Z" level=info msg="Initializing buildkit"
Jul 9 10:12:00.193173 dockerd[1735]: time="2025-07-09T10:12:00.193106814Z" level=info msg="Completed buildkit initialization"
Jul 9 10:12:00.197743 dockerd[1735]: time="2025-07-09T10:12:00.197703841Z" level=info msg="Daemon has completed initialization"
Jul 9 10:12:00.197883 dockerd[1735]: time="2025-07-09T10:12:00.197764161Z" level=info msg="API listen on /run/docker.sock"
Jul 9 10:12:00.197912 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 9 10:12:00.686408 containerd[1513]: time="2025-07-09T10:12:00.686332974Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 9 10:12:00.910148 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1841312762-merged.mount: Deactivated successfully.
Jul 9 10:12:01.447372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506635459.mount: Deactivated successfully.
Jul 9 10:12:02.480065 containerd[1513]: time="2025-07-09T10:12:02.480019785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:02.481072 containerd[1513]: time="2025-07-09T10:12:02.481031899Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718"
Jul 9 10:12:02.481931 containerd[1513]: time="2025-07-09T10:12:02.481876868Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:02.484240 containerd[1513]: time="2025-07-09T10:12:02.484183632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:02.485951 containerd[1513]: time="2025-07-09T10:12:02.485920679Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.799542675s"
Jul 9 10:12:02.485999 containerd[1513]: time="2025-07-09T10:12:02.485953940Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jul 9 10:12:02.489004 containerd[1513]: time="2025-07-09T10:12:02.488967067Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 9 10:12:03.624500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 9 10:12:03.626821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 10:12:03.766516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 10:12:03.769785 containerd[1513]: time="2025-07-09T10:12:03.769741806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:03.770529 containerd[1513]: time="2025-07-09T10:12:03.770472049Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625"
Jul 9 10:12:03.770845 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 10:12:03.771818 containerd[1513]: time="2025-07-09T10:12:03.771762072Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:03.774567 containerd[1513]: time="2025-07-09T10:12:03.774150881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:03.775192 containerd[1513]: time="2025-07-09T10:12:03.775070478Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.286063667s"
Jul 9 10:12:03.775192 containerd[1513]: time="2025-07-09T10:12:03.775105940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jul 9 10:12:03.775986 containerd[1513]: time="2025-07-09T10:12:03.775928319Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 9 10:12:03.805442 kubelet[2018]: E0709 10:12:03.805391 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 10:12:03.808769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 10:12:03.808987 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 10:12:03.809557 systemd[1]: kubelet.service: Consumed 145ms CPU time, 106.1M memory peak.
Jul 9 10:12:04.994242 containerd[1513]: time="2025-07-09T10:12:04.994044701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:04.995096 containerd[1513]: time="2025-07-09T10:12:04.994821558Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517"
Jul 9 10:12:04.995814 containerd[1513]: time="2025-07-09T10:12:04.995779080Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:04.998053 containerd[1513]: time="2025-07-09T10:12:04.998016835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:04.998976 containerd[1513]: time="2025-07-09T10:12:04.998947222Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.222910597s"
Jul 9 10:12:04.999026 containerd[1513]: time="2025-07-09T10:12:04.998980121Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jul 9 10:12:04.999776 containerd[1513]: time="2025-07-09T10:12:04.999543252Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 9 10:12:06.051451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72252867.mount: Deactivated successfully.
Jul 9 10:12:06.283201 containerd[1513]: time="2025-07-09T10:12:06.283152365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:06.283812 containerd[1513]: time="2025-07-09T10:12:06.283790957Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474"
Jul 9 10:12:06.284647 containerd[1513]: time="2025-07-09T10:12:06.284594160Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:06.286639 containerd[1513]: time="2025-07-09T10:12:06.286584378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:06.287211 containerd[1513]: time="2025-07-09T10:12:06.287180266Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.287605115s"
Jul 9 10:12:06.287279 containerd[1513]: time="2025-07-09T10:12:06.287212964Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 9 10:12:06.287683 containerd[1513]: time="2025-07-09T10:12:06.287657529Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 9 10:12:06.807755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262727741.mount: Deactivated successfully.
Jul 9 10:12:07.672748 containerd[1513]: time="2025-07-09T10:12:07.672193655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:07.673073 containerd[1513]: time="2025-07-09T10:12:07.672870177Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Jul 9 10:12:07.673575 containerd[1513]: time="2025-07-09T10:12:07.673547379Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:07.676662 containerd[1513]: time="2025-07-09T10:12:07.676615538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:07.677417 containerd[1513]: time="2025-07-09T10:12:07.677300864Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.389608715s"
Jul 9 10:12:07.677417 containerd[1513]: time="2025-07-09T10:12:07.677335082Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 9 10:12:07.678008 containerd[1513]: time="2025-07-09T10:12:07.677985550Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 9 10:12:08.095222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384221074.mount: Deactivated successfully.
Jul 9 10:12:08.100663 containerd[1513]: time="2025-07-09T10:12:08.100604183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 10:12:08.101366 containerd[1513]: time="2025-07-09T10:12:08.101322875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 9 10:12:08.101974 containerd[1513]: time="2025-07-09T10:12:08.101940634Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 10:12:08.105374 containerd[1513]: time="2025-07-09T10:12:08.105294010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 10:12:08.106883 containerd[1513]: time="2025-07-09T10:12:08.106836288Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 428.818642ms"
Jul 9 10:12:08.106883 containerd[1513]: time="2025-07-09T10:12:08.106876589Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 9 10:12:08.107666 containerd[1513]: time="2025-07-09T10:12:08.107625657Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 9 10:12:08.582748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634216837.mount: Deactivated successfully.
Jul 9 10:12:10.083897 containerd[1513]: time="2025-07-09T10:12:10.083846900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:10.084328 containerd[1513]: time="2025-07-09T10:12:10.084294157Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601"
Jul 9 10:12:10.085136 containerd[1513]: time="2025-07-09T10:12:10.085103070Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:10.087669 containerd[1513]: time="2025-07-09T10:12:10.087621333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:12:10.089114 containerd[1513]: time="2025-07-09T10:12:10.089075440Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.981410843s"
Jul 9 10:12:10.089114 containerd[1513]: time="2025-07-09T10:12:10.089111737Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 9 10:12:14.059311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 9 10:12:14.061212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 10:12:14.206421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 10:12:14.210697 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 10:12:14.256112 kubelet[2180]: E0709 10:12:14.256071 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 10:12:14.258639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 10:12:14.258786 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 10:12:14.259341 systemd[1]: kubelet.service: Consumed 140ms CPU time, 106.2M memory peak.
Jul 9 10:12:15.406205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 10:12:15.406472 systemd[1]: kubelet.service: Consumed 140ms CPU time, 106.2M memory peak.
Jul 9 10:12:15.408242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 10:12:15.434496 systemd[1]: Reload requested from client PID 2197 ('systemctl') (unit session-7.scope)...
Jul 9 10:12:15.434597 systemd[1]: Reloading...
Jul 9 10:12:15.502782 zram_generator::config[2239]: No configuration found.
Jul 9 10:12:15.702884 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 10:12:15.788048 systemd[1]: Reloading finished in 353 ms.
Jul 9 10:12:15.828350 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 10:12:15.830347 systemd[1]: kubelet.service: Deactivated successfully.
Jul 9 10:12:15.831752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 10:12:15.831799 systemd[1]: kubelet.service: Consumed 93ms CPU time, 95.2M memory peak.
Jul 9 10:12:15.833097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 10:12:15.942019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 10:12:15.945664 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 10:12:15.976513 kubelet[2287]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 10:12:15.976767 kubelet[2287]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 10:12:15.976767 kubelet[2287]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 10:12:15.976868 kubelet[2287]: I0709 10:12:15.976832 2287 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 10:12:16.579713 kubelet[2287]: I0709 10:12:16.579662 2287 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 9 10:12:16.579713 kubelet[2287]: I0709 10:12:16.579695 2287 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 10:12:16.579961 kubelet[2287]: I0709 10:12:16.579932 2287 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 9 10:12:16.636745 kubelet[2287]: E0709 10:12:16.636684 2287 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 9 10:12:16.638062 kubelet[2287]: I0709 10:12:16.638037 2287 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 10:12:16.649043 kubelet[2287]: I0709 10:12:16.649012 2287 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 9 10:12:16.651587 kubelet[2287]: I0709 10:12:16.651553 2287 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 10:12:16.652271 kubelet[2287]: I0709 10:12:16.652226 2287 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 10:12:16.652423 kubelet[2287]: I0709 10:12:16.652263 2287 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 10:12:16.652506 kubelet[2287]: I0709 10:12:16.652479 2287 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 10:12:16.652506 kubelet[2287]: I0709 10:12:16.652487 2287 container_manager_linux.go:303] "Creating device plugin manager"
Jul 9 10:12:16.653192 kubelet[2287]: I0709 10:12:16.653159 2287 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 10:12:16.655590 kubelet[2287]: I0709 10:12:16.655561 2287 kubelet.go:480] "Attempting to sync node with API server"
Jul 9 10:12:16.655590 kubelet[2287]: I0709 10:12:16.655587 2287 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 10:12:16.655640 kubelet[2287]: I0709 10:12:16.655611 2287 kubelet.go:386] "Adding apiserver pod source"
Jul 9 10:12:16.656679 kubelet[2287]: I0709 10:12:16.656593 2287 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 10:12:16.659342 kubelet[2287]: I0709 10:12:16.659316 2287 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 9 10:12:16.659406 kubelet[2287]: E0709 10:12:16.659332 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 9 10:12:16.659475 kubelet[2287]: E0709 10:12:16.659426 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 9 10:12:16.660332 kubelet[2287]: I0709 10:12:16.659998 2287 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 9 10:12:16.660332 kubelet[2287]: W0709 10:12:16.660113 2287 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 9 10:12:16.662249 kubelet[2287]: I0709 10:12:16.662231 2287 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 10:12:16.662313 kubelet[2287]: I0709 10:12:16.662275 2287 server.go:1289] "Started kubelet"
Jul 9 10:12:16.662405 kubelet[2287]: I0709 10:12:16.662380 2287 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 10:12:16.663763 kubelet[2287]: I0709 10:12:16.663458 2287 server.go:317] "Adding debug handlers to kubelet server"
Jul 9 10:12:16.665502 kubelet[2287]: I0709 10:12:16.665479 2287 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 10:12:16.670582 kubelet[2287]: I0709 10:12:16.670552 2287 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 10:12:16.670831 kubelet[2287]: E0709 10:12:16.667907 2287 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.141:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.141:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18508d97735e1519 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 10:12:16.662246681 +0000 UTC m=+0.712741282,LastTimestamp:2025-07-09 10:12:16.662246681 +0000 UTC m=+0.712741282,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 9 10:12:16.671867 kubelet[2287]: E0709 10:12:16.671830 2287 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 9 10:12:16.672372 kubelet[2287]: E0709 10:12:16.672352 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 10:12:16.672590 kubelet[2287]: I0709 10:12:16.672431 2287 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 10:12:16.672860 kubelet[2287]: I0709 10:12:16.672807 2287 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 10:12:16.672860 kubelet[2287]: E0709 10:12:16.672844 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="200ms"
Jul 9 10:12:16.673921 kubelet[2287]: I0709 10:12:16.672934 2287 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 10:12:16.673921 kubelet[2287]: I0709 10:12:16.673107 2287 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 10:12:16.673921 kubelet[2287]: I0709 10:12:16.673317 2287 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 10:12:16.673921 kubelet[2287]: E0709 10:12:16.673381 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 9 10:12:16.674127 kubelet[2287]: I0709 10:12:16.674096 2287 factory.go:223] Registration of the systemd container factory successfully
Jul 9 10:12:16.674202 kubelet[2287]: I0709 10:12:16.674183 2287 factory.go:221] Registration of the crio container factory failed: Get
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 10:12:16.675275 kubelet[2287]: I0709 10:12:16.675256 2287 factory.go:223] Registration of the containerd container factory successfully Jul 9 10:12:16.683998 kubelet[2287]: I0709 10:12:16.683979 2287 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 10:12:16.683998 kubelet[2287]: I0709 10:12:16.683995 2287 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 10:12:16.684099 kubelet[2287]: I0709 10:12:16.684012 2287 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:12:16.686766 kubelet[2287]: I0709 10:12:16.686634 2287 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 9 10:12:16.687570 kubelet[2287]: I0709 10:12:16.687555 2287 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 9 10:12:16.687653 kubelet[2287]: I0709 10:12:16.687643 2287 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 9 10:12:16.687727 kubelet[2287]: I0709 10:12:16.687715 2287 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 10:12:16.687799 kubelet[2287]: I0709 10:12:16.687788 2287 kubelet.go:2436] "Starting kubelet main sync loop" Jul 9 10:12:16.687894 kubelet[2287]: E0709 10:12:16.687877 2287 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 10:12:16.693134 kubelet[2287]: E0709 10:12:16.693099 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 9 10:12:16.768462 kubelet[2287]: I0709 10:12:16.768422 2287 policy_none.go:49] "None policy: Start" Jul 9 10:12:16.768462 kubelet[2287]: I0709 10:12:16.768461 2287 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 10:12:16.768584 kubelet[2287]: I0709 10:12:16.768474 2287 state_mem.go:35] "Initializing new in-memory state store" Jul 9 10:12:16.773230 kubelet[2287]: E0709 10:12:16.773194 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 10:12:16.774189 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 10:12:16.788027 kubelet[2287]: E0709 10:12:16.787994 2287 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 10:12:16.799444 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 10:12:16.802629 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 9 10:12:16.822542 kubelet[2287]: E0709 10:12:16.822453 2287 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 9 10:12:16.822685 kubelet[2287]: I0709 10:12:16.822666 2287 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 10:12:16.822752 kubelet[2287]: I0709 10:12:16.822684 2287 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 10:12:16.823066 kubelet[2287]: I0709 10:12:16.823024 2287 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 10:12:16.823834 kubelet[2287]: E0709 10:12:16.823778 2287 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 10:12:16.824141 kubelet[2287]: E0709 10:12:16.823843 2287 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 9 10:12:16.874316 kubelet[2287]: E0709 10:12:16.874210 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="400ms" Jul 9 10:12:16.926382 kubelet[2287]: I0709 10:12:16.924336 2287 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:12:16.926382 kubelet[2287]: E0709 10:12:16.924761 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Jul 9 10:12:16.998584 systemd[1]: Created slice kubepods-burstable-pod92511fb5c82cc0e598b0d583913aad8c.slice - libcontainer container kubepods-burstable-pod92511fb5c82cc0e598b0d583913aad8c.slice. 
Jul 9 10:12:17.023216 kubelet[2287]: E0709 10:12:17.023069 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:17.026359 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 9 10:12:17.036806 kubelet[2287]: E0709 10:12:17.036757 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:17.039960 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 9 10:12:17.042056 kubelet[2287]: E0709 10:12:17.041878 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:17.075243 kubelet[2287]: I0709 10:12:17.075197 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92511fb5c82cc0e598b0d583913aad8c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"92511fb5c82cc0e598b0d583913aad8c\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:17.075579 kubelet[2287]: I0709 10:12:17.075401 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92511fb5c82cc0e598b0d583913aad8c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"92511fb5c82cc0e598b0d583913aad8c\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:17.075579 kubelet[2287]: I0709 10:12:17.075427 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:17.075579 kubelet[2287]: I0709 10:12:17.075446 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:17.075579 kubelet[2287]: I0709 10:12:17.075459 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92511fb5c82cc0e598b0d583913aad8c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"92511fb5c82cc0e598b0d583913aad8c\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:17.075579 kubelet[2287]: I0709 10:12:17.075475 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:17.075754 kubelet[2287]: I0709 10:12:17.075488 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:17.075754 kubelet[2287]: I0709 10:12:17.075504 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:17.075754 kubelet[2287]: I0709 10:12:17.075519 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:17.126480 kubelet[2287]: I0709 10:12:17.126370 2287 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:12:17.127142 kubelet[2287]: E0709 10:12:17.127071 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Jul 9 10:12:17.275688 kubelet[2287]: E0709 10:12:17.275639 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="800ms" Jul 9 10:12:17.325597 containerd[1513]: time="2025-07-09T10:12:17.325540692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:92511fb5c82cc0e598b0d583913aad8c,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:17.337964 containerd[1513]: time="2025-07-09T10:12:17.337923909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:17.343189 containerd[1513]: time="2025-07-09T10:12:17.343154103Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:17.343729 containerd[1513]: time="2025-07-09T10:12:17.343299920Z" level=info msg="connecting to shim c9dfbc88b912e23f29f5b534483e388d5a4eeacab16377dabd31aee0cbf19b9e" address="unix:///run/containerd/s/6b8304ff67b0c9c74610a2780780e57d1174c3413c5d7aa7aeb5ab95366f3db3" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:17.360852 containerd[1513]: time="2025-07-09T10:12:17.360810091Z" level=info msg="connecting to shim 62e902ce401f90403f1dc38314dea72c94fb6a815179d5056fdd533d3cdc8c70" address="unix:///run/containerd/s/9752987cf1b7ca6614e8feb632bf5f1adf449b39fc62a32716f2586f7295389d" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:17.374003 containerd[1513]: time="2025-07-09T10:12:17.373963727Z" level=info msg="connecting to shim 1ee7e899ea687cfaa99ed2cf74d1a147be749bf2307f9216922f67378112591c" address="unix:///run/containerd/s/c4e10edfa400d3447bffbc5da03b927161b34dfaa843dd6b47fd1ecbf7ddb169" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:17.381917 systemd[1]: Started cri-containerd-c9dfbc88b912e23f29f5b534483e388d5a4eeacab16377dabd31aee0cbf19b9e.scope - libcontainer container c9dfbc88b912e23f29f5b534483e388d5a4eeacab16377dabd31aee0cbf19b9e. Jul 9 10:12:17.386230 systemd[1]: Started cri-containerd-62e902ce401f90403f1dc38314dea72c94fb6a815179d5056fdd533d3cdc8c70.scope - libcontainer container 62e902ce401f90403f1dc38314dea72c94fb6a815179d5056fdd533d3cdc8c70. Jul 9 10:12:17.401872 systemd[1]: Started cri-containerd-1ee7e899ea687cfaa99ed2cf74d1a147be749bf2307f9216922f67378112591c.scope - libcontainer container 1ee7e899ea687cfaa99ed2cf74d1a147be749bf2307f9216922f67378112591c. 
Jul 9 10:12:17.441042 containerd[1513]: time="2025-07-09T10:12:17.441005005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:92511fb5c82cc0e598b0d583913aad8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9dfbc88b912e23f29f5b534483e388d5a4eeacab16377dabd31aee0cbf19b9e\"" Jul 9 10:12:17.446879 containerd[1513]: time="2025-07-09T10:12:17.446832592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"62e902ce401f90403f1dc38314dea72c94fb6a815179d5056fdd533d3cdc8c70\"" Jul 9 10:12:17.448282 containerd[1513]: time="2025-07-09T10:12:17.448032538Z" level=info msg="CreateContainer within sandbox \"c9dfbc88b912e23f29f5b534483e388d5a4eeacab16377dabd31aee0cbf19b9e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 10:12:17.450800 containerd[1513]: time="2025-07-09T10:12:17.450771123Z" level=info msg="CreateContainer within sandbox \"62e902ce401f90403f1dc38314dea72c94fb6a815179d5056fdd533d3cdc8c70\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 10:12:17.458997 containerd[1513]: time="2025-07-09T10:12:17.458951505Z" level=info msg="Container b35c6944191c4cf22aa7439be897daba9569aee06d6f72b2fef4f0115da15987: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:17.472385 containerd[1513]: time="2025-07-09T10:12:17.472342714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ee7e899ea687cfaa99ed2cf74d1a147be749bf2307f9216922f67378112591c\"" Jul 9 10:12:17.478312 containerd[1513]: time="2025-07-09T10:12:17.478197592Z" level=info msg="CreateContainer within sandbox \"62e902ce401f90403f1dc38314dea72c94fb6a815179d5056fdd533d3cdc8c70\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"b35c6944191c4cf22aa7439be897daba9569aee06d6f72b2fef4f0115da15987\"" Jul 9 10:12:17.478756 containerd[1513]: time="2025-07-09T10:12:17.478342368Z" level=info msg="CreateContainer within sandbox \"1ee7e899ea687cfaa99ed2cf74d1a147be749bf2307f9216922f67378112591c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 10:12:17.478756 containerd[1513]: time="2025-07-09T10:12:17.478516236Z" level=info msg="Container cdaadea6f514154c54379b0373bba0c37d96e17a8e3ba97293c90423ee762d22: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:17.478816 containerd[1513]: time="2025-07-09T10:12:17.478766853Z" level=info msg="StartContainer for \"b35c6944191c4cf22aa7439be897daba9569aee06d6f72b2fef4f0115da15987\"" Jul 9 10:12:17.480651 containerd[1513]: time="2025-07-09T10:12:17.480596125Z" level=info msg="connecting to shim b35c6944191c4cf22aa7439be897daba9569aee06d6f72b2fef4f0115da15987" address="unix:///run/containerd/s/9752987cf1b7ca6614e8feb632bf5f1adf449b39fc62a32716f2586f7295389d" protocol=ttrpc version=3 Jul 9 10:12:17.487329 containerd[1513]: time="2025-07-09T10:12:17.487275843Z" level=info msg="CreateContainer within sandbox \"c9dfbc88b912e23f29f5b534483e388d5a4eeacab16377dabd31aee0cbf19b9e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cdaadea6f514154c54379b0373bba0c37d96e17a8e3ba97293c90423ee762d22\"" Jul 9 10:12:17.487980 containerd[1513]: time="2025-07-09T10:12:17.487885120Z" level=info msg="StartContainer for \"cdaadea6f514154c54379b0373bba0c37d96e17a8e3ba97293c90423ee762d22\"" Jul 9 10:12:17.489475 containerd[1513]: time="2025-07-09T10:12:17.489298070Z" level=info msg="Container 212afd8c55af7a48191e6853d97f33d9e464867a0fb9ce0d6c55d85a1318c145: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:17.489475 containerd[1513]: time="2025-07-09T10:12:17.489304472Z" level=info msg="connecting to shim cdaadea6f514154c54379b0373bba0c37d96e17a8e3ba97293c90423ee762d22" 
address="unix:///run/containerd/s/6b8304ff67b0c9c74610a2780780e57d1174c3413c5d7aa7aeb5ab95366f3db3" protocol=ttrpc version=3 Jul 9 10:12:17.499066 containerd[1513]: time="2025-07-09T10:12:17.499024653Z" level=info msg="CreateContainer within sandbox \"1ee7e899ea687cfaa99ed2cf74d1a147be749bf2307f9216922f67378112591c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"212afd8c55af7a48191e6853d97f33d9e464867a0fb9ce0d6c55d85a1318c145\"" Jul 9 10:12:17.499529 containerd[1513]: time="2025-07-09T10:12:17.499503039Z" level=info msg="StartContainer for \"212afd8c55af7a48191e6853d97f33d9e464867a0fb9ce0d6c55d85a1318c145\"" Jul 9 10:12:17.499862 systemd[1]: Started cri-containerd-b35c6944191c4cf22aa7439be897daba9569aee06d6f72b2fef4f0115da15987.scope - libcontainer container b35c6944191c4cf22aa7439be897daba9569aee06d6f72b2fef4f0115da15987. Jul 9 10:12:17.500494 containerd[1513]: time="2025-07-09T10:12:17.500455289Z" level=info msg="connecting to shim 212afd8c55af7a48191e6853d97f33d9e464867a0fb9ce0d6c55d85a1318c145" address="unix:///run/containerd/s/c4e10edfa400d3447bffbc5da03b927161b34dfaa843dd6b47fd1ecbf7ddb169" protocol=ttrpc version=3 Jul 9 10:12:17.513878 systemd[1]: Started cri-containerd-cdaadea6f514154c54379b0373bba0c37d96e17a8e3ba97293c90423ee762d22.scope - libcontainer container cdaadea6f514154c54379b0373bba0c37d96e17a8e3ba97293c90423ee762d22. Jul 9 10:12:17.518739 systemd[1]: Started cri-containerd-212afd8c55af7a48191e6853d97f33d9e464867a0fb9ce0d6c55d85a1318c145.scope - libcontainer container 212afd8c55af7a48191e6853d97f33d9e464867a0fb9ce0d6c55d85a1318c145. 
Jul 9 10:12:17.529734 kubelet[2287]: I0709 10:12:17.529660 2287 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:12:17.530493 kubelet[2287]: E0709 10:12:17.530330 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Jul 9 10:12:17.556676 containerd[1513]: time="2025-07-09T10:12:17.556639984Z" level=info msg="StartContainer for \"b35c6944191c4cf22aa7439be897daba9569aee06d6f72b2fef4f0115da15987\" returns successfully" Jul 9 10:12:17.593640 containerd[1513]: time="2025-07-09T10:12:17.593603922Z" level=info msg="StartContainer for \"212afd8c55af7a48191e6853d97f33d9e464867a0fb9ce0d6c55d85a1318c145\" returns successfully" Jul 9 10:12:17.607373 containerd[1513]: time="2025-07-09T10:12:17.605818273Z" level=info msg="StartContainer for \"cdaadea6f514154c54379b0373bba0c37d96e17a8e3ba97293c90423ee762d22\" returns successfully" Jul 9 10:12:17.700128 kubelet[2287]: E0709 10:12:17.699047 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:17.702371 kubelet[2287]: E0709 10:12:17.702348 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:17.706740 kubelet[2287]: E0709 10:12:17.705585 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:17.714448 kubelet[2287]: E0709 10:12:17.714402 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 9 10:12:17.753287 kubelet[2287]: E0709 10:12:17.753229 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 9 10:12:18.331896 kubelet[2287]: I0709 10:12:18.331847 2287 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:12:18.707260 kubelet[2287]: E0709 10:12:18.707167 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:18.709328 kubelet[2287]: E0709 10:12:18.709028 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:19.207974 kubelet[2287]: E0709 10:12:19.207943 2287 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 9 10:12:19.303183 kubelet[2287]: I0709 10:12:19.302006 2287 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 10:12:19.303183 kubelet[2287]: E0709 10:12:19.302053 2287 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 9 10:12:19.373333 kubelet[2287]: I0709 10:12:19.373296 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:19.384817 kubelet[2287]: E0709 10:12:19.384771 2287 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:19.384817 kubelet[2287]: I0709 10:12:19.384802 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:19.388954 kubelet[2287]: E0709 10:12:19.388915 2287 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:19.388954 kubelet[2287]: I0709 10:12:19.388940 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:19.393926 kubelet[2287]: E0709 10:12:19.393886 2287 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:19.666473 kubelet[2287]: I0709 10:12:19.663726 2287 apiserver.go:52] "Watching apiserver" Jul 9 10:12:19.673635 kubelet[2287]: I0709 10:12:19.673596 2287 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 10:12:21.061362 kubelet[2287]: I0709 10:12:21.061329 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:21.594089 kubelet[2287]: I0709 10:12:21.594048 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:21.594947 systemd[1]: Reload requested from client PID 2573 ('systemctl') (unit session-7.scope)... Jul 9 10:12:21.594962 systemd[1]: Reloading... Jul 9 10:12:21.669832 zram_generator::config[2619]: No configuration found. Jul 9 10:12:21.733583 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 9 10:12:21.833201 systemd[1]: Reloading finished in 237 ms. Jul 9 10:12:21.867113 kubelet[2287]: I0709 10:12:21.866419 2287 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 10:12:21.866678 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:12:21.877027 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 10:12:21.877223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:12:21.877275 systemd[1]: kubelet.service: Consumed 1.110s CPU time, 129.2M memory peak. Jul 9 10:12:21.879343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:12:22.020416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:12:22.024391 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 10:12:22.055841 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 10:12:22.055841 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 10:12:22.055841 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 10:12:22.056266 kubelet[2658]: I0709 10:12:22.055892 2658 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 10:12:22.063728 kubelet[2658]: I0709 10:12:22.063594 2658 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 9 10:12:22.063728 kubelet[2658]: I0709 10:12:22.063622 2658 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 10:12:22.064315 kubelet[2658]: I0709 10:12:22.064287 2658 server.go:956] "Client rotation is on, will bootstrap in background" Jul 9 10:12:22.066670 kubelet[2658]: I0709 10:12:22.066058 2658 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 9 10:12:22.068765 kubelet[2658]: I0709 10:12:22.068743 2658 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 10:12:22.074757 kubelet[2658]: I0709 10:12:22.074733 2658 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 10:12:22.078226 kubelet[2658]: I0709 10:12:22.078159 2658 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 10:12:22.078655 kubelet[2658]: I0709 10:12:22.078630 2658 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 10:12:22.080308 kubelet[2658]: I0709 10:12:22.078994 2658 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 10:12:22.080308 kubelet[2658]: I0709 10:12:22.079177 2658 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 10:12:22.080308 
kubelet[2658]: I0709 10:12:22.079185 2658 container_manager_linux.go:303] "Creating device plugin manager" Jul 9 10:12:22.080308 kubelet[2658]: I0709 10:12:22.079234 2658 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:12:22.080308 kubelet[2658]: I0709 10:12:22.079384 2658 kubelet.go:480] "Attempting to sync node with API server" Jul 9 10:12:22.082866 kubelet[2658]: I0709 10:12:22.079405 2658 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 10:12:22.082866 kubelet[2658]: I0709 10:12:22.079425 2658 kubelet.go:386] "Adding apiserver pod source" Jul 9 10:12:22.082866 kubelet[2658]: I0709 10:12:22.079438 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 10:12:22.082866 kubelet[2658]: I0709 10:12:22.082733 2658 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 10:12:22.084405 kubelet[2658]: I0709 10:12:22.083300 2658 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 9 10:12:22.087786 kubelet[2658]: I0709 10:12:22.087760 2658 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 10:12:22.087855 kubelet[2658]: I0709 10:12:22.087801 2658 server.go:1289] "Started kubelet" Jul 9 10:12:22.091723 kubelet[2658]: I0709 10:12:22.089700 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 10:12:22.095011 kubelet[2658]: I0709 10:12:22.094943 2658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 10:12:22.095327 kubelet[2658]: I0709 10:12:22.095310 2658 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 10:12:22.095862 kubelet[2658]: I0709 10:12:22.095834 2658 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 10:12:22.096113 kubelet[2658]: I0709 
10:12:22.096080 2658 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 10:12:22.096440 kubelet[2658]: E0709 10:12:22.096400 2658 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 10:12:22.096760 kubelet[2658]: I0709 10:12:22.094693 2658 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 10:12:22.097064 kubelet[2658]: I0709 10:12:22.097050 2658 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 10:12:22.098754 kubelet[2658]: I0709 10:12:22.098358 2658 reconciler.go:26] "Reconciler: start to sync state" Jul 9 10:12:22.099745 kubelet[2658]: I0709 10:12:22.099685 2658 server.go:317] "Adding debug handlers to kubelet server" Jul 9 10:12:22.103797 kubelet[2658]: I0709 10:12:22.103766 2658 factory.go:223] Registration of the systemd container factory successfully Jul 9 10:12:22.103883 kubelet[2658]: I0709 10:12:22.103857 2658 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 10:12:22.106346 kubelet[2658]: I0709 10:12:22.106306 2658 factory.go:223] Registration of the containerd container factory successfully Jul 9 10:12:22.107627 kubelet[2658]: I0709 10:12:22.107488 2658 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 9 10:12:22.110431 kubelet[2658]: I0709 10:12:22.110114 2658 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 9 10:12:22.110431 kubelet[2658]: I0709 10:12:22.110136 2658 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 9 10:12:22.110431 kubelet[2658]: I0709 10:12:22.110153 2658 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 10:12:22.110431 kubelet[2658]: I0709 10:12:22.110159 2658 kubelet.go:2436] "Starting kubelet main sync loop" Jul 9 10:12:22.110431 kubelet[2658]: E0709 10:12:22.110200 2658 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 10:12:22.140001 kubelet[2658]: I0709 10:12:22.139358 2658 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 10:12:22.140001 kubelet[2658]: I0709 10:12:22.139376 2658 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 10:12:22.140001 kubelet[2658]: I0709 10:12:22.139404 2658 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:12:22.140001 kubelet[2658]: I0709 10:12:22.139534 2658 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 10:12:22.140001 kubelet[2658]: I0709 10:12:22.139543 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 10:12:22.140001 kubelet[2658]: I0709 10:12:22.139558 2658 policy_none.go:49] "None policy: Start" Jul 9 10:12:22.140001 kubelet[2658]: I0709 10:12:22.139566 2658 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 10:12:22.140001 kubelet[2658]: I0709 10:12:22.139574 2658 state_mem.go:35] "Initializing new in-memory state store" Jul 9 10:12:22.140001 kubelet[2658]: I0709 10:12:22.139658 2658 state_mem.go:75] "Updated machine memory state" Jul 9 10:12:22.143813 kubelet[2658]: E0709 10:12:22.143790 2658 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 9 10:12:22.144688 kubelet[2658]: I0709 10:12:22.144674 2658 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 10:12:22.144973 kubelet[2658]: I0709 10:12:22.144843 2658 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 10:12:22.145409 kubelet[2658]: I0709 10:12:22.145188 2658 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Jul 9 10:12:22.145626 kubelet[2658]: E0709 10:12:22.145606 2658 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 10:12:22.211540 kubelet[2658]: I0709 10:12:22.211243 2658 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:22.211540 kubelet[2658]: I0709 10:12:22.211403 2658 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:22.211900 kubelet[2658]: I0709 10:12:22.211882 2658 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:22.216365 kubelet[2658]: E0709 10:12:22.216308 2658 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:22.216689 kubelet[2658]: E0709 10:12:22.216664 2658 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:22.247172 kubelet[2658]: I0709 10:12:22.247138 2658 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:12:22.253048 kubelet[2658]: I0709 10:12:22.253025 2658 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 9 10:12:22.253191 kubelet[2658]: I0709 10:12:22.253093 2658 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 10:12:22.300417 kubelet[2658]: I0709 10:12:22.300348 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92511fb5c82cc0e598b0d583913aad8c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"92511fb5c82cc0e598b0d583913aad8c\") " pod="kube-system/kube-apiserver-localhost" Jul 9 
10:12:22.300417 kubelet[2658]: I0709 10:12:22.300390 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:22.300637 kubelet[2658]: I0709 10:12:22.300583 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:22.300637 kubelet[2658]: I0709 10:12:22.300615 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:22.300780 kubelet[2658]: I0709 10:12:22.300731 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92511fb5c82cc0e598b0d583913aad8c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"92511fb5c82cc0e598b0d583913aad8c\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:22.300780 kubelet[2658]: I0709 10:12:22.300757 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92511fb5c82cc0e598b0d583913aad8c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"92511fb5c82cc0e598b0d583913aad8c\") " pod="kube-system/kube-apiserver-localhost" Jul 9 
10:12:22.300929 kubelet[2658]: I0709 10:12:22.300771 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:22.300929 kubelet[2658]: I0709 10:12:22.300875 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:22.300929 kubelet[2658]: I0709 10:12:22.300899 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:23.081093 kubelet[2658]: I0709 10:12:23.080901 2658 apiserver.go:52] "Watching apiserver" Jul 9 10:12:23.097227 kubelet[2658]: I0709 10:12:23.097185 2658 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 10:12:23.128076 kubelet[2658]: I0709 10:12:23.128043 2658 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:23.129199 kubelet[2658]: I0709 10:12:23.129146 2658 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:23.133713 kubelet[2658]: E0709 10:12:23.133680 2658 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 
9 10:12:23.136043 kubelet[2658]: E0709 10:12:23.135535 2658 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:23.147043 kubelet[2658]: I0709 10:12:23.146966 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.146952107 podStartE2EDuration="1.146952107s" podCreationTimestamp="2025-07-09 10:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:12:23.146358477 +0000 UTC m=+1.118804341" watchObservedRunningTime="2025-07-09 10:12:23.146952107 +0000 UTC m=+1.119397971" Jul 9 10:12:23.161910 kubelet[2658]: I0709 10:12:23.161843 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.16182617 podStartE2EDuration="2.16182617s" podCreationTimestamp="2025-07-09 10:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:12:23.15389654 +0000 UTC m=+1.126342404" watchObservedRunningTime="2025-07-09 10:12:23.16182617 +0000 UTC m=+1.134272034" Jul 9 10:12:23.162203 kubelet[2658]: I0709 10:12:23.161989 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.161984341 podStartE2EDuration="2.161984341s" podCreationTimestamp="2025-07-09 10:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:12:23.161015549 +0000 UTC m=+1.133461413" watchObservedRunningTime="2025-07-09 10:12:23.161984341 +0000 UTC m=+1.134430205" Jul 9 10:12:26.045400 kubelet[2658]: I0709 10:12:26.045372 2658 kuberuntime_manager.go:1746] "Updating runtime 
config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 10:12:26.046179 containerd[1513]: time="2025-07-09T10:12:26.046076209Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 10:12:26.046460 kubelet[2658]: I0709 10:12:26.046218 2658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 10:12:26.968555 systemd[1]: Created slice kubepods-besteffort-pod1c858110_a96a_4512_b9de_37d995b63e7f.slice - libcontainer container kubepods-besteffort-pod1c858110_a96a_4512_b9de_37d995b63e7f.slice. Jul 9 10:12:27.029856 kubelet[2658]: I0709 10:12:27.029747 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c858110-a96a-4512-b9de-37d995b63e7f-kube-proxy\") pod \"kube-proxy-9lwcv\" (UID: \"1c858110-a96a-4512-b9de-37d995b63e7f\") " pod="kube-system/kube-proxy-9lwcv" Jul 9 10:12:27.029856 kubelet[2658]: I0709 10:12:27.029796 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c858110-a96a-4512-b9de-37d995b63e7f-xtables-lock\") pod \"kube-proxy-9lwcv\" (UID: \"1c858110-a96a-4512-b9de-37d995b63e7f\") " pod="kube-system/kube-proxy-9lwcv" Jul 9 10:12:27.029856 kubelet[2658]: I0709 10:12:27.029832 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c858110-a96a-4512-b9de-37d995b63e7f-lib-modules\") pod \"kube-proxy-9lwcv\" (UID: \"1c858110-a96a-4512-b9de-37d995b63e7f\") " pod="kube-system/kube-proxy-9lwcv" Jul 9 10:12:27.029856 kubelet[2658]: I0709 10:12:27.029854 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgcc\" (UniqueName: 
\"kubernetes.io/projected/1c858110-a96a-4512-b9de-37d995b63e7f-kube-api-access-cxgcc\") pod \"kube-proxy-9lwcv\" (UID: \"1c858110-a96a-4512-b9de-37d995b63e7f\") " pod="kube-system/kube-proxy-9lwcv" Jul 9 10:12:27.185678 systemd[1]: Created slice kubepods-besteffort-podc231b78d_f0fd_4275_a8a2_dab42f804460.slice - libcontainer container kubepods-besteffort-podc231b78d_f0fd_4275_a8a2_dab42f804460.slice. Jul 9 10:12:27.231647 kubelet[2658]: I0709 10:12:27.231505 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c231b78d-f0fd-4275-a8a2-dab42f804460-var-lib-calico\") pod \"tigera-operator-747864d56d-kfb8p\" (UID: \"c231b78d-f0fd-4275-a8a2-dab42f804460\") " pod="tigera-operator/tigera-operator-747864d56d-kfb8p" Jul 9 10:12:27.231647 kubelet[2658]: I0709 10:12:27.231564 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cb5h\" (UniqueName: \"kubernetes.io/projected/c231b78d-f0fd-4275-a8a2-dab42f804460-kube-api-access-2cb5h\") pod \"tigera-operator-747864d56d-kfb8p\" (UID: \"c231b78d-f0fd-4275-a8a2-dab42f804460\") " pod="tigera-operator/tigera-operator-747864d56d-kfb8p" Jul 9 10:12:27.289926 containerd[1513]: time="2025-07-09T10:12:27.289879045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lwcv,Uid:1c858110-a96a-4512-b9de-37d995b63e7f,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:27.308212 containerd[1513]: time="2025-07-09T10:12:27.307901269Z" level=info msg="connecting to shim 0a1e05070b08ed1f82a24e70235aae8dee20926256474444b464fa8fce92e878" address="unix:///run/containerd/s/8b39dea46f993759682c2e3e37babc0c68a08d333b95e1cf33cd1ef8e1a024f8" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:27.332901 systemd[1]: Started cri-containerd-0a1e05070b08ed1f82a24e70235aae8dee20926256474444b464fa8fce92e878.scope - libcontainer container 
0a1e05070b08ed1f82a24e70235aae8dee20926256474444b464fa8fce92e878. Jul 9 10:12:27.357307 containerd[1513]: time="2025-07-09T10:12:27.357264769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lwcv,Uid:1c858110-a96a-4512-b9de-37d995b63e7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a1e05070b08ed1f82a24e70235aae8dee20926256474444b464fa8fce92e878\"" Jul 9 10:12:27.361506 containerd[1513]: time="2025-07-09T10:12:27.361466399Z" level=info msg="CreateContainer within sandbox \"0a1e05070b08ed1f82a24e70235aae8dee20926256474444b464fa8fce92e878\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 10:12:27.369960 containerd[1513]: time="2025-07-09T10:12:27.369922313Z" level=info msg="Container fe47d633d69879b8272022185ae0d48d3c47a012b053bf5da1d0ba44fc9ca380: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:27.377158 containerd[1513]: time="2025-07-09T10:12:27.377111069Z" level=info msg="CreateContainer within sandbox \"0a1e05070b08ed1f82a24e70235aae8dee20926256474444b464fa8fce92e878\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe47d633d69879b8272022185ae0d48d3c47a012b053bf5da1d0ba44fc9ca380\"" Jul 9 10:12:27.377834 containerd[1513]: time="2025-07-09T10:12:27.377808347Z" level=info msg="StartContainer for \"fe47d633d69879b8272022185ae0d48d3c47a012b053bf5da1d0ba44fc9ca380\"" Jul 9 10:12:27.379722 containerd[1513]: time="2025-07-09T10:12:27.379661872Z" level=info msg="connecting to shim fe47d633d69879b8272022185ae0d48d3c47a012b053bf5da1d0ba44fc9ca380" address="unix:///run/containerd/s/8b39dea46f993759682c2e3e37babc0c68a08d333b95e1cf33cd1ef8e1a024f8" protocol=ttrpc version=3 Jul 9 10:12:27.405888 systemd[1]: Started cri-containerd-fe47d633d69879b8272022185ae0d48d3c47a012b053bf5da1d0ba44fc9ca380.scope - libcontainer container fe47d633d69879b8272022185ae0d48d3c47a012b053bf5da1d0ba44fc9ca380. 
Jul 9 10:12:27.438849 containerd[1513]: time="2025-07-09T10:12:27.438808902Z" level=info msg="StartContainer for \"fe47d633d69879b8272022185ae0d48d3c47a012b053bf5da1d0ba44fc9ca380\" returns successfully" Jul 9 10:12:27.489029 containerd[1513]: time="2025-07-09T10:12:27.488872799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-kfb8p,Uid:c231b78d-f0fd-4275-a8a2-dab42f804460,Namespace:tigera-operator,Attempt:0,}" Jul 9 10:12:27.508643 containerd[1513]: time="2025-07-09T10:12:27.508581381Z" level=info msg="connecting to shim 416c9d42eeb56b39b738f877ca5af78e7ee39a6eb000ace772e1d79835018ffa" address="unix:///run/containerd/s/07eaef41944371839972ce0f4d43b41de62d2ef620cde500a2389ea3b583c8c5" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:27.533872 systemd[1]: Started cri-containerd-416c9d42eeb56b39b738f877ca5af78e7ee39a6eb000ace772e1d79835018ffa.scope - libcontainer container 416c9d42eeb56b39b738f877ca5af78e7ee39a6eb000ace772e1d79835018ffa. Jul 9 10:12:27.564830 containerd[1513]: time="2025-07-09T10:12:27.564783857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-kfb8p,Uid:c231b78d-f0fd-4275-a8a2-dab42f804460,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"416c9d42eeb56b39b738f877ca5af78e7ee39a6eb000ace772e1d79835018ffa\"" Jul 9 10:12:27.567306 containerd[1513]: time="2025-07-09T10:12:27.567280964Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 9 10:12:28.145797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4125440944.mount: Deactivated successfully. Jul 9 10:12:28.668261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296174630.mount: Deactivated successfully. 
Jul 9 10:12:28.982798 containerd[1513]: time="2025-07-09T10:12:28.982670716Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:28.983251 containerd[1513]: time="2025-07-09T10:12:28.983208104Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 9 10:12:28.984204 containerd[1513]: time="2025-07-09T10:12:28.984163206Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:28.986097 containerd[1513]: time="2025-07-09T10:12:28.986058166Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:28.986825 containerd[1513]: time="2025-07-09T10:12:28.986792087Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.419256931s" Jul 9 10:12:28.986878 containerd[1513]: time="2025-07-09T10:12:28.986825056Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 9 10:12:28.990206 containerd[1513]: time="2025-07-09T10:12:28.990169734Z" level=info msg="CreateContainer within sandbox \"416c9d42eeb56b39b738f877ca5af78e7ee39a6eb000ace772e1d79835018ffa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 9 10:12:28.996397 containerd[1513]: time="2025-07-09T10:12:28.996350229Z" level=info msg="Container 
35fd7a5d017efa6dc259683afba920c4156971e15743755596d999e505225ff6: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:29.002775 containerd[1513]: time="2025-07-09T10:12:29.002700114Z" level=info msg="CreateContainer within sandbox \"416c9d42eeb56b39b738f877ca5af78e7ee39a6eb000ace772e1d79835018ffa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"35fd7a5d017efa6dc259683afba920c4156971e15743755596d999e505225ff6\"" Jul 9 10:12:29.003233 containerd[1513]: time="2025-07-09T10:12:29.003205209Z" level=info msg="StartContainer for \"35fd7a5d017efa6dc259683afba920c4156971e15743755596d999e505225ff6\"" Jul 9 10:12:29.004420 containerd[1513]: time="2025-07-09T10:12:29.004356475Z" level=info msg="connecting to shim 35fd7a5d017efa6dc259683afba920c4156971e15743755596d999e505225ff6" address="unix:///run/containerd/s/07eaef41944371839972ce0f4d43b41de62d2ef620cde500a2389ea3b583c8c5" protocol=ttrpc version=3 Jul 9 10:12:29.026954 systemd[1]: Started cri-containerd-35fd7a5d017efa6dc259683afba920c4156971e15743755596d999e505225ff6.scope - libcontainer container 35fd7a5d017efa6dc259683afba920c4156971e15743755596d999e505225ff6. 
Jul 9 10:12:29.069218 containerd[1513]: time="2025-07-09T10:12:29.069167740Z" level=info msg="StartContainer for \"35fd7a5d017efa6dc259683afba920c4156971e15743755596d999e505225ff6\" returns successfully" Jul 9 10:12:29.155119 kubelet[2658]: I0709 10:12:29.155042 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9lwcv" podStartSLOduration=3.155023158 podStartE2EDuration="3.155023158s" podCreationTimestamp="2025-07-09 10:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:12:28.149233905 +0000 UTC m=+6.121679769" watchObservedRunningTime="2025-07-09 10:12:29.155023158 +0000 UTC m=+7.127469022" Jul 9 10:12:29.155488 kubelet[2658]: I0709 10:12:29.155188 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-kfb8p" podStartSLOduration=0.7337513 podStartE2EDuration="2.15518184s" podCreationTimestamp="2025-07-09 10:12:27 +0000 UTC" firstStartedPulling="2025-07-09 10:12:27.566092988 +0000 UTC m=+5.538538852" lastFinishedPulling="2025-07-09 10:12:28.987523528 +0000 UTC m=+6.959969392" observedRunningTime="2025-07-09 10:12:29.154735642 +0000 UTC m=+7.127181506" watchObservedRunningTime="2025-07-09 10:12:29.15518184 +0000 UTC m=+7.127627704" Jul 9 10:12:34.500913 sudo[1715]: pam_unix(sudo:session): session closed for user root Jul 9 10:12:34.502442 sshd[1714]: Connection closed by 10.0.0.1 port 60672 Jul 9 10:12:34.503156 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jul 9 10:12:34.509751 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Jul 9 10:12:34.511098 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:60672.service: Deactivated successfully. Jul 9 10:12:34.513558 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 9 10:12:34.513850 systemd[1]: session-7.scope: Consumed 7.526s CPU time, 225.1M memory peak. Jul 9 10:12:34.517041 systemd-logind[1485]: Removed session 7. Jul 9 10:12:36.457838 update_engine[1491]: I20250709 10:12:36.457769 1491 update_attempter.cc:509] Updating boot flags... Jul 9 10:12:39.729694 systemd[1]: Created slice kubepods-besteffort-pod63beb49d_ead2_4f8a_956b_883b3d697254.slice - libcontainer container kubepods-besteffort-pod63beb49d_ead2_4f8a_956b_883b3d697254.slice. Jul 9 10:12:39.813466 kubelet[2658]: I0709 10:12:39.813125 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fqjq\" (UniqueName: \"kubernetes.io/projected/63beb49d-ead2-4f8a-956b-883b3d697254-kube-api-access-8fqjq\") pod \"calico-typha-7b8d8cf9ff-bwr8t\" (UID: \"63beb49d-ead2-4f8a-956b-883b3d697254\") " pod="calico-system/calico-typha-7b8d8cf9ff-bwr8t" Jul 9 10:12:39.814114 kubelet[2658]: I0709 10:12:39.813395 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/63beb49d-ead2-4f8a-956b-883b3d697254-typha-certs\") pod \"calico-typha-7b8d8cf9ff-bwr8t\" (UID: \"63beb49d-ead2-4f8a-956b-883b3d697254\") " pod="calico-system/calico-typha-7b8d8cf9ff-bwr8t" Jul 9 10:12:39.814114 kubelet[2658]: I0709 10:12:39.813927 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63beb49d-ead2-4f8a-956b-883b3d697254-tigera-ca-bundle\") pod \"calico-typha-7b8d8cf9ff-bwr8t\" (UID: \"63beb49d-ead2-4f8a-956b-883b3d697254\") " pod="calico-system/calico-typha-7b8d8cf9ff-bwr8t" Jul 9 10:12:40.026669 systemd[1]: Created slice kubepods-besteffort-pod1c089a46_f39a_448b_b707_92d5021a1c1d.slice - libcontainer container kubepods-besteffort-pod1c089a46_f39a_448b_b707_92d5021a1c1d.slice. 
Jul 9 10:12:40.043653 containerd[1513]: time="2025-07-09T10:12:40.043601373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b8d8cf9ff-bwr8t,Uid:63beb49d-ead2-4f8a-956b-883b3d697254,Namespace:calico-system,Attempt:0,}" Jul 9 10:12:40.082837 containerd[1513]: time="2025-07-09T10:12:40.082768875Z" level=info msg="connecting to shim 6a2bcd20eeba67ea70c84b32890dca4b6160a32113c1196343178ba6461ffb7d" address="unix:///run/containerd/s/3733d27c0aa5fde4d192e7a3b0885d724d6f3b2bdcc06dc283a177cc8122a380" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:40.116922 kubelet[2658]: I0709 10:12:40.116864 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c089a46-f39a-448b-b707-92d5021a1c1d-xtables-lock\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117048 kubelet[2658]: I0709 10:12:40.116951 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlvn\" (UniqueName: \"kubernetes.io/projected/1c089a46-f39a-448b-b707-92d5021a1c1d-kube-api-access-wqlvn\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117048 kubelet[2658]: I0709 10:12:40.116972 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c089a46-f39a-448b-b707-92d5021a1c1d-var-lib-calico\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117048 kubelet[2658]: I0709 10:12:40.116988 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c089a46-f39a-448b-b707-92d5021a1c1d-lib-modules\") 
pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117048 kubelet[2658]: I0709 10:12:40.117023 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1c089a46-f39a-448b-b707-92d5021a1c1d-cni-bin-dir\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117048 kubelet[2658]: I0709 10:12:40.117037 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1c089a46-f39a-448b-b707-92d5021a1c1d-policysync\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117151 kubelet[2658]: I0709 10:12:40.117050 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1c089a46-f39a-448b-b707-92d5021a1c1d-var-run-calico\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117151 kubelet[2658]: I0709 10:12:40.117066 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1c089a46-f39a-448b-b707-92d5021a1c1d-node-certs\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117151 kubelet[2658]: I0709 10:12:40.117101 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1c089a46-f39a-448b-b707-92d5021a1c1d-cni-net-dir\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " 
pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117151 kubelet[2658]: I0709 10:12:40.117117 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1c089a46-f39a-448b-b707-92d5021a1c1d-flexvol-driver-host\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117151 kubelet[2658]: I0709 10:12:40.117133 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c089a46-f39a-448b-b707-92d5021a1c1d-tigera-ca-bundle\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.117245 kubelet[2658]: I0709 10:12:40.117148 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1c089a46-f39a-448b-b707-92d5021a1c1d-cni-log-dir\") pod \"calico-node-2qr59\" (UID: \"1c089a46-f39a-448b-b707-92d5021a1c1d\") " pod="calico-system/calico-node-2qr59" Jul 9 10:12:40.143872 systemd[1]: Started cri-containerd-6a2bcd20eeba67ea70c84b32890dca4b6160a32113c1196343178ba6461ffb7d.scope - libcontainer container 6a2bcd20eeba67ea70c84b32890dca4b6160a32113c1196343178ba6461ffb7d. 
Jul 9 10:12:40.205215 containerd[1513]: time="2025-07-09T10:12:40.205173859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b8d8cf9ff-bwr8t,Uid:63beb49d-ead2-4f8a-956b-883b3d697254,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a2bcd20eeba67ea70c84b32890dca4b6160a32113c1196343178ba6461ffb7d\"" Jul 9 10:12:40.215281 containerd[1513]: time="2025-07-09T10:12:40.215168173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 9 10:12:40.220495 kubelet[2658]: E0709 10:12:40.220468 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.220495 kubelet[2658]: W0709 10:12:40.220490 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.223147 kubelet[2658]: E0709 10:12:40.223120 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.231876 kubelet[2658]: E0709 10:12:40.231858 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.231876 kubelet[2658]: W0709 10:12:40.231874 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.231975 kubelet[2658]: E0709 10:12:40.231890 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.286158 kubelet[2658]: E0709 10:12:40.285856 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp5rv" podUID="80cd3875-f2cb-436b-b9bd-83f74aa24913" Jul 9 10:12:40.306170 kubelet[2658]: E0709 10:12:40.306140 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.306170 kubelet[2658]: W0709 10:12:40.306163 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.306477 kubelet[2658]: E0709 10:12:40.306185 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.306477 kubelet[2658]: E0709 10:12:40.306334 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.311523 kubelet[2658]: W0709 10:12:40.306342 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.311598 kubelet[2658]: E0709 10:12:40.311537 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.311833 kubelet[2658]: E0709 10:12:40.311818 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.311867 kubelet[2658]: W0709 10:12:40.311832 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.311867 kubelet[2658]: E0709 10:12:40.311843 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.312107 kubelet[2658]: E0709 10:12:40.312091 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.312107 kubelet[2658]: W0709 10:12:40.312105 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.312167 kubelet[2658]: E0709 10:12:40.312115 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.312287 kubelet[2658]: E0709 10:12:40.312275 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.312287 kubelet[2658]: W0709 10:12:40.312286 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.312337 kubelet[2658]: E0709 10:12:40.312294 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.312613 kubelet[2658]: E0709 10:12:40.312596 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.312613 kubelet[2658]: W0709 10:12:40.312612 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.312667 kubelet[2658]: E0709 10:12:40.312622 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.312811 kubelet[2658]: E0709 10:12:40.312797 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.312811 kubelet[2658]: W0709 10:12:40.312809 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.312868 kubelet[2658]: E0709 10:12:40.312818 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.312978 kubelet[2658]: E0709 10:12:40.312967 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.312978 kubelet[2658]: W0709 10:12:40.312977 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.313034 kubelet[2658]: E0709 10:12:40.312985 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.313137 kubelet[2658]: E0709 10:12:40.313124 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.313137 kubelet[2658]: W0709 10:12:40.313135 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.313183 kubelet[2658]: E0709 10:12:40.313142 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.313269 kubelet[2658]: E0709 10:12:40.313257 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.313269 kubelet[2658]: W0709 10:12:40.313267 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.313315 kubelet[2658]: E0709 10:12:40.313276 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.313412 kubelet[2658]: E0709 10:12:40.313391 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.313412 kubelet[2658]: W0709 10:12:40.313398 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.313468 kubelet[2658]: E0709 10:12:40.313414 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.313559 kubelet[2658]: E0709 10:12:40.313547 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.313583 kubelet[2658]: W0709 10:12:40.313559 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.313583 kubelet[2658]: E0709 10:12:40.313567 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.313722 kubelet[2658]: E0709 10:12:40.313701 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.313722 kubelet[2658]: W0709 10:12:40.313721 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.313767 kubelet[2658]: E0709 10:12:40.313728 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.314270 kubelet[2658]: E0709 10:12:40.314253 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.314306 kubelet[2658]: W0709 10:12:40.314271 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.314306 kubelet[2658]: E0709 10:12:40.314284 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.314478 kubelet[2658]: E0709 10:12:40.314464 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.314507 kubelet[2658]: W0709 10:12:40.314481 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.314507 kubelet[2658]: E0709 10:12:40.314490 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.314669 kubelet[2658]: E0709 10:12:40.314655 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.314669 kubelet[2658]: W0709 10:12:40.314667 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.314739 kubelet[2658]: E0709 10:12:40.314676 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.314852 kubelet[2658]: E0709 10:12:40.314837 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.314852 kubelet[2658]: W0709 10:12:40.314851 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.314901 kubelet[2658]: E0709 10:12:40.314859 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.315216 kubelet[2658]: E0709 10:12:40.314984 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.315216 kubelet[2658]: W0709 10:12:40.314995 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.315216 kubelet[2658]: E0709 10:12:40.315003 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.315216 kubelet[2658]: E0709 10:12:40.315164 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.315216 kubelet[2658]: W0709 10:12:40.315173 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.315216 kubelet[2658]: E0709 10:12:40.315185 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.315373 kubelet[2658]: E0709 10:12:40.315321 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.315373 kubelet[2658]: W0709 10:12:40.315330 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.315373 kubelet[2658]: E0709 10:12:40.315338 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.318859 kubelet[2658]: E0709 10:12:40.318829 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.318859 kubelet[2658]: W0709 10:12:40.318847 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.318859 kubelet[2658]: E0709 10:12:40.318859 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.318996 kubelet[2658]: I0709 10:12:40.318883 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80cd3875-f2cb-436b-b9bd-83f74aa24913-kubelet-dir\") pod \"csi-node-driver-gp5rv\" (UID: \"80cd3875-f2cb-436b-b9bd-83f74aa24913\") " pod="calico-system/csi-node-driver-gp5rv" Jul 9 10:12:40.319095 kubelet[2658]: E0709 10:12:40.319065 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.319095 kubelet[2658]: W0709 10:12:40.319080 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.319095 kubelet[2658]: E0709 10:12:40.319089 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.319095 kubelet[2658]: I0709 10:12:40.319108 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/80cd3875-f2cb-436b-b9bd-83f74aa24913-socket-dir\") pod \"csi-node-driver-gp5rv\" (UID: \"80cd3875-f2cb-436b-b9bd-83f74aa24913\") " pod="calico-system/csi-node-driver-gp5rv" Jul 9 10:12:40.319516 kubelet[2658]: E0709 10:12:40.319299 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.319516 kubelet[2658]: W0709 10:12:40.319314 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.319516 kubelet[2658]: E0709 10:12:40.319326 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.319629 kubelet[2658]: E0709 10:12:40.319618 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.319683 kubelet[2658]: W0709 10:12:40.319673 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.319751 kubelet[2658]: E0709 10:12:40.319738 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.320035 kubelet[2658]: E0709 10:12:40.319957 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.320035 kubelet[2658]: W0709 10:12:40.319970 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.320035 kubelet[2658]: E0709 10:12:40.319979 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.320035 kubelet[2658]: I0709 10:12:40.320003 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/80cd3875-f2cb-436b-b9bd-83f74aa24913-registration-dir\") pod \"csi-node-driver-gp5rv\" (UID: \"80cd3875-f2cb-436b-b9bd-83f74aa24913\") " pod="calico-system/csi-node-driver-gp5rv" Jul 9 10:12:40.320220 kubelet[2658]: E0709 10:12:40.320183 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.320220 kubelet[2658]: W0709 10:12:40.320200 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.320220 kubelet[2658]: E0709 10:12:40.320212 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.320381 kubelet[2658]: E0709 10:12:40.320369 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.320381 kubelet[2658]: W0709 10:12:40.320378 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.320432 kubelet[2658]: E0709 10:12:40.320386 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.320551 kubelet[2658]: E0709 10:12:40.320531 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.320551 kubelet[2658]: W0709 10:12:40.320542 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.320551 kubelet[2658]: E0709 10:12:40.320549 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.320627 kubelet[2658]: I0709 10:12:40.320570 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/80cd3875-f2cb-436b-b9bd-83f74aa24913-varrun\") pod \"csi-node-driver-gp5rv\" (UID: \"80cd3875-f2cb-436b-b9bd-83f74aa24913\") " pod="calico-system/csi-node-driver-gp5rv" Jul 9 10:12:40.320776 kubelet[2658]: E0709 10:12:40.320729 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.320776 kubelet[2658]: W0709 10:12:40.320742 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.320776 kubelet[2658]: E0709 10:12:40.320750 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.320776 kubelet[2658]: I0709 10:12:40.320768 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sxzd\" (UniqueName: \"kubernetes.io/projected/80cd3875-f2cb-436b-b9bd-83f74aa24913-kube-api-access-4sxzd\") pod \"csi-node-driver-gp5rv\" (UID: \"80cd3875-f2cb-436b-b9bd-83f74aa24913\") " pod="calico-system/csi-node-driver-gp5rv" Jul 9 10:12:40.321323 kubelet[2658]: E0709 10:12:40.320925 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.321323 kubelet[2658]: W0709 10:12:40.320946 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.321323 kubelet[2658]: E0709 10:12:40.320958 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.321323 kubelet[2658]: E0709 10:12:40.321183 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.321323 kubelet[2658]: W0709 10:12:40.321191 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.321323 kubelet[2658]: E0709 10:12:40.321199 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.321522 kubelet[2658]: E0709 10:12:40.321452 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.321522 kubelet[2658]: W0709 10:12:40.321463 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.321522 kubelet[2658]: E0709 10:12:40.321472 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.322561 kubelet[2658]: E0709 10:12:40.321788 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.322561 kubelet[2658]: W0709 10:12:40.321806 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.322561 kubelet[2658]: E0709 10:12:40.321818 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.322561 kubelet[2658]: E0709 10:12:40.322002 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.322561 kubelet[2658]: W0709 10:12:40.322011 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.322561 kubelet[2658]: E0709 10:12:40.322024 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.322561 kubelet[2658]: E0709 10:12:40.322201 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.322561 kubelet[2658]: W0709 10:12:40.322208 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.322561 kubelet[2658]: E0709 10:12:40.322215 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.331026 containerd[1513]: time="2025-07-09T10:12:40.330988603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2qr59,Uid:1c089a46-f39a-448b-b707-92d5021a1c1d,Namespace:calico-system,Attempt:0,}" Jul 9 10:12:40.346422 containerd[1513]: time="2025-07-09T10:12:40.346294112Z" level=info msg="connecting to shim f89918811db8e928cc8ea454d102918f8120a04070629e5cc6e30414203c284c" address="unix:///run/containerd/s/909956e6611281aa6e4a3e36422b064daf2d63640777ac252d9a14e94c5ec484" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:40.376877 systemd[1]: Started cri-containerd-f89918811db8e928cc8ea454d102918f8120a04070629e5cc6e30414203c284c.scope - libcontainer container f89918811db8e928cc8ea454d102918f8120a04070629e5cc6e30414203c284c. Jul 9 10:12:40.419112 containerd[1513]: time="2025-07-09T10:12:40.419063392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2qr59,Uid:1c089a46-f39a-448b-b707-92d5021a1c1d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f89918811db8e928cc8ea454d102918f8120a04070629e5cc6e30414203c284c\"" Jul 9 10:12:40.421623 kubelet[2658]: E0709 10:12:40.421598 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.421623 kubelet[2658]: W0709 10:12:40.421615 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.421623 kubelet[2658]: E0709 10:12:40.421633 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.422029 kubelet[2658]: E0709 10:12:40.421855 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.422029 kubelet[2658]: W0709 10:12:40.421865 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.422029 kubelet[2658]: E0709 10:12:40.421874 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.422096 kubelet[2658]: E0709 10:12:40.422071 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.422096 kubelet[2658]: W0709 10:12:40.422080 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.422096 kubelet[2658]: E0709 10:12:40.422087 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.423124 kubelet[2658]: E0709 10:12:40.423083 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.423124 kubelet[2658]: W0709 10:12:40.423102 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.423124 kubelet[2658]: E0709 10:12:40.423117 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.424616 kubelet[2658]: E0709 10:12:40.424587 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.424616 kubelet[2658]: W0709 10:12:40.424606 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.424726 kubelet[2658]: E0709 10:12:40.424620 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.424916 kubelet[2658]: E0709 10:12:40.424900 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.424916 kubelet[2658]: W0709 10:12:40.424914 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.424983 kubelet[2658]: E0709 10:12:40.424924 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.425124 kubelet[2658]: E0709 10:12:40.425110 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.425151 kubelet[2658]: W0709 10:12:40.425123 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.425151 kubelet[2658]: E0709 10:12:40.425133 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.425876 kubelet[2658]: E0709 10:12:40.425840 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.425876 kubelet[2658]: W0709 10:12:40.425857 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.425876 kubelet[2658]: E0709 10:12:40.425870 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.426727 kubelet[2658]: E0709 10:12:40.426456 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.426727 kubelet[2658]: W0709 10:12:40.426474 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.426727 kubelet[2658]: E0709 10:12:40.426487 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.427680 kubelet[2658]: E0709 10:12:40.427634 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.427680 kubelet[2658]: W0709 10:12:40.427651 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.427680 kubelet[2658]: E0709 10:12:40.427664 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.428855 kubelet[2658]: E0709 10:12:40.427921 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.428855 kubelet[2658]: W0709 10:12:40.427940 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.428855 kubelet[2658]: E0709 10:12:40.427951 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.428855 kubelet[2658]: E0709 10:12:40.428106 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.428855 kubelet[2658]: W0709 10:12:40.428114 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.428855 kubelet[2658]: E0709 10:12:40.428124 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.428982 kubelet[2658]: E0709 10:12:40.428946 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.428982 kubelet[2658]: W0709 10:12:40.428959 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.429018 kubelet[2658]: E0709 10:12:40.428983 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.430282 kubelet[2658]: E0709 10:12:40.430257 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.430282 kubelet[2658]: W0709 10:12:40.430273 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.430282 kubelet[2658]: E0709 10:12:40.430286 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.430626 kubelet[2658]: E0709 10:12:40.430588 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.430626 kubelet[2658]: W0709 10:12:40.430602 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.430626 kubelet[2658]: E0709 10:12:40.430613 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.431021 kubelet[2658]: E0709 10:12:40.430911 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.431021 kubelet[2658]: W0709 10:12:40.430971 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.431021 kubelet[2658]: E0709 10:12:40.430986 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.431760 kubelet[2658]: E0709 10:12:40.431372 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.431760 kubelet[2658]: W0709 10:12:40.431385 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.431760 kubelet[2658]: E0709 10:12:40.431396 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.431760 kubelet[2658]: E0709 10:12:40.431537 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.431760 kubelet[2658]: W0709 10:12:40.431545 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.431760 kubelet[2658]: E0709 10:12:40.431553 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.431760 kubelet[2658]: E0709 10:12:40.431745 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.431760 kubelet[2658]: W0709 10:12:40.431754 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.431760 kubelet[2658]: E0709 10:12:40.431763 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.431961 kubelet[2658]: E0709 10:12:40.431919 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.431961 kubelet[2658]: W0709 10:12:40.431927 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.431961 kubelet[2658]: E0709 10:12:40.431935 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.432222 kubelet[2658]: E0709 10:12:40.432188 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.432222 kubelet[2658]: W0709 10:12:40.432205 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.432222 kubelet[2658]: E0709 10:12:40.432215 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.433043 kubelet[2658]: E0709 10:12:40.433003 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.433043 kubelet[2658]: W0709 10:12:40.433022 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.433043 kubelet[2658]: E0709 10:12:40.433036 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.434254 kubelet[2658]: E0709 10:12:40.433945 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.434254 kubelet[2658]: W0709 10:12:40.433965 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.434254 kubelet[2658]: E0709 10:12:40.433977 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.435798 kubelet[2658]: E0709 10:12:40.435769 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.435798 kubelet[2658]: W0709 10:12:40.435792 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.435891 kubelet[2658]: E0709 10:12:40.435809 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:40.436052 kubelet[2658]: E0709 10:12:40.436032 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.436052 kubelet[2658]: W0709 10:12:40.436046 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.436100 kubelet[2658]: E0709 10:12:40.436056 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:40.448079 kubelet[2658]: E0709 10:12:40.448056 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:40.448079 kubelet[2658]: W0709 10:12:40.448077 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:40.448200 kubelet[2658]: E0709 10:12:40.448092 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:41.294165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143923801.mount: Deactivated successfully. Jul 9 10:12:41.801346 containerd[1513]: time="2025-07-09T10:12:41.801294039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:41.802446 containerd[1513]: time="2025-07-09T10:12:41.802412762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 9 10:12:41.803096 containerd[1513]: time="2025-07-09T10:12:41.803064721Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:41.804818 containerd[1513]: time="2025-07-09T10:12:41.804775832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:41.806016 containerd[1513]: time="2025-07-09T10:12:41.805912358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.590692575s" Jul 9 10:12:41.806016 containerd[1513]: time="2025-07-09T10:12:41.805952445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 9 10:12:41.812016 containerd[1513]: time="2025-07-09T10:12:41.811986101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 9 10:12:41.828440 containerd[1513]: time="2025-07-09T10:12:41.828388879Z" level=info msg="CreateContainer within sandbox \"6a2bcd20eeba67ea70c84b32890dca4b6160a32113c1196343178ba6461ffb7d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 9 10:12:41.833773 containerd[1513]: time="2025-07-09T10:12:41.833433876Z" level=info msg="Container 93a55611c6ff00f90f74c801a7aae05baea0294049c38da0f0dccbc2ad871358: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:41.839005 containerd[1513]: time="2025-07-09T10:12:41.838972201Z" level=info msg="CreateContainer within sandbox \"6a2bcd20eeba67ea70c84b32890dca4b6160a32113c1196343178ba6461ffb7d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"93a55611c6ff00f90f74c801a7aae05baea0294049c38da0f0dccbc2ad871358\"" Jul 9 10:12:41.839697 containerd[1513]: time="2025-07-09T10:12:41.839659526Z" level=info msg="StartContainer for \"93a55611c6ff00f90f74c801a7aae05baea0294049c38da0f0dccbc2ad871358\"" Jul 9 10:12:41.840640 containerd[1513]: time="2025-07-09T10:12:41.840613419Z" level=info msg="connecting to shim 93a55611c6ff00f90f74c801a7aae05baea0294049c38da0f0dccbc2ad871358" address="unix:///run/containerd/s/3733d27c0aa5fde4d192e7a3b0885d724d6f3b2bdcc06dc283a177cc8122a380" protocol=ttrpc version=3 Jul 9 
10:12:41.856864 systemd[1]: Started cri-containerd-93a55611c6ff00f90f74c801a7aae05baea0294049c38da0f0dccbc2ad871358.scope - libcontainer container 93a55611c6ff00f90f74c801a7aae05baea0294049c38da0f0dccbc2ad871358. Jul 9 10:12:41.890978 containerd[1513]: time="2025-07-09T10:12:41.890631062Z" level=info msg="StartContainer for \"93a55611c6ff00f90f74c801a7aae05baea0294049c38da0f0dccbc2ad871358\" returns successfully" Jul 9 10:12:42.111656 kubelet[2658]: E0709 10:12:42.111546 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp5rv" podUID="80cd3875-f2cb-436b-b9bd-83f74aa24913" Jul 9 10:12:42.225653 kubelet[2658]: E0709 10:12:42.225380 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.225653 kubelet[2658]: W0709 10:12:42.225402 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.225653 kubelet[2658]: E0709 10:12:42.225419 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:42.225653 kubelet[2658]: I0709 10:12:42.225407 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b8d8cf9ff-bwr8t" podStartSLOduration=1.629047979 podStartE2EDuration="3.225349535s" podCreationTimestamp="2025-07-09 10:12:39 +0000 UTC" firstStartedPulling="2025-07-09 10:12:40.214815587 +0000 UTC m=+18.187261451" lastFinishedPulling="2025-07-09 10:12:41.811117143 +0000 UTC m=+19.783563007" observedRunningTime="2025-07-09 10:12:42.224006219 +0000 UTC m=+20.196452083" watchObservedRunningTime="2025-07-09 10:12:42.225349535 +0000 UTC m=+20.197795399" Jul 9 10:12:42.225926 kubelet[2658]: E0709 10:12:42.225783 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.225926 kubelet[2658]: W0709 10:12:42.225795 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.225926 kubelet[2658]: E0709 10:12:42.225835 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:42.226313 kubelet[2658]: E0709 10:12:42.225999 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.226313 kubelet[2658]: W0709 10:12:42.226009 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.226313 kubelet[2658]: E0709 10:12:42.226018 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:42.226623 kubelet[2658]: E0709 10:12:42.226609 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.226623 kubelet[2658]: W0709 10:12:42.226622 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.226678 kubelet[2658]: E0709 10:12:42.226632 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:42.226819 kubelet[2658]: E0709 10:12:42.226801 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.226819 kubelet[2658]: W0709 10:12:42.226814 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.226883 kubelet[2658]: E0709 10:12:42.226822 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:42.227032 kubelet[2658]: E0709 10:12:42.227006 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.227032 kubelet[2658]: W0709 10:12:42.227017 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.227032 kubelet[2658]: E0709 10:12:42.227026 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:42.227210 kubelet[2658]: E0709 10:12:42.227193 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.227210 kubelet[2658]: W0709 10:12:42.227203 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.227553 kubelet[2658]: E0709 10:12:42.227228 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:42.227795 kubelet[2658]: E0709 10:12:42.227769 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.227795 kubelet[2658]: W0709 10:12:42.227788 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.227795 kubelet[2658]: E0709 10:12:42.227797 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:42.227996 kubelet[2658]: E0709 10:12:42.227979 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.227996 kubelet[2658]: W0709 10:12:42.227991 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.228065 kubelet[2658]: E0709 10:12:42.228001 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:42.228243 kubelet[2658]: E0709 10:12:42.228223 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.228243 kubelet[2658]: W0709 10:12:42.228237 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.228243 kubelet[2658]: E0709 10:12:42.228247 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:42.228452 kubelet[2658]: E0709 10:12:42.228420 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.228452 kubelet[2658]: W0709 10:12:42.228432 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.228452 kubelet[2658]: E0709 10:12:42.228440 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:42.228618 kubelet[2658]: E0709 10:12:42.228596 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.228699 kubelet[2658]: W0709 10:12:42.228608 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.228699 kubelet[2658]: E0709 10:12:42.228690 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:42.228951 kubelet[2658]: E0709 10:12:42.228913 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.228951 kubelet[2658]: W0709 10:12:42.228926 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.228951 kubelet[2658]: E0709 10:12:42.228935 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:42.229833 kubelet[2658]: E0709 10:12:42.229136 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.229833 kubelet[2658]: W0709 10:12:42.229143 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.229833 kubelet[2658]: E0709 10:12:42.229151 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:42.229833 kubelet[2658]: E0709 10:12:42.229274 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.229833 kubelet[2658]: W0709 10:12:42.229281 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.229833 kubelet[2658]: E0709 10:12:42.229288 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:42.240173 kubelet[2658]: E0709 10:12:42.240139 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.240173 kubelet[2658]: W0709 10:12:42.240160 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.240173 kubelet[2658]: E0709 10:12:42.240174 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 10:12:42.240369 kubelet[2658]: E0709 10:12:42.240337 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 10:12:42.240369 kubelet[2658]: W0709 10:12:42.240348 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 10:12:42.240369 kubelet[2658]: E0709 10:12:42.240355 2658 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 10:12:43.025103 containerd[1513]: time="2025-07-09T10:12:43.025057243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:43.025650 containerd[1513]: time="2025-07-09T10:12:43.025624820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 9 10:12:43.026265 containerd[1513]: time="2025-07-09T10:12:43.026240725Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:43.028209 containerd[1513]: time="2025-07-09T10:12:43.028179455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:43.028722 containerd[1513]: time="2025-07-09T10:12:43.028680701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.216390945s" Jul 9 10:12:43.028773 containerd[1513]: time="2025-07-09T10:12:43.028732910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 9 10:12:43.034414 containerd[1513]: time="2025-07-09T10:12:43.034363429Z" level=info msg="CreateContainer within sandbox \"f89918811db8e928cc8ea454d102918f8120a04070629e5cc6e30414203c284c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 9 10:12:43.040745 containerd[1513]: time="2025-07-09T10:12:43.040504516Z" level=info msg="Container 21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:43.060191 containerd[1513]: time="2025-07-09T10:12:43.060136221Z" level=info msg="CreateContainer within sandbox \"f89918811db8e928cc8ea454d102918f8120a04070629e5cc6e30414203c284c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac\"" Jul 9 10:12:43.060663 containerd[1513]: time="2025-07-09T10:12:43.060637227Z" level=info msg="StartContainer for \"21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac\"" Jul 9 10:12:43.062677 containerd[1513]: time="2025-07-09T10:12:43.062305351Z" level=info msg="connecting to shim 21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac" address="unix:///run/containerd/s/909956e6611281aa6e4a3e36422b064daf2d63640777ac252d9a14e94c5ec484" protocol=ttrpc version=3 Jul 9 10:12:43.084947 systemd[1]: Started cri-containerd-21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac.scope - libcontainer container 21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac. Jul 9 10:12:43.126932 containerd[1513]: time="2025-07-09T10:12:43.125197709Z" level=info msg="StartContainer for \"21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac\" returns successfully" Jul 9 10:12:43.142207 systemd[1]: cri-containerd-21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac.scope: Deactivated successfully. 
Jul 9 10:12:43.174261 containerd[1513]: time="2025-07-09T10:12:43.174203861Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac\" id:\"21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac\" pid:3355 exited_at:{seconds:1752055963 nanos:155044836}" Jul 9 10:12:43.174261 containerd[1513]: time="2025-07-09T10:12:43.174252989Z" level=info msg="received exit event container_id:\"21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac\" id:\"21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac\" pid:3355 exited_at:{seconds:1752055963 nanos:155044836}" Jul 9 10:12:43.207371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21b34d369eb5831c8f547edb9bb4b5d7e8c770b63a627da380a8178ba843deac-rootfs.mount: Deactivated successfully. Jul 9 10:12:43.215265 kubelet[2658]: I0709 10:12:43.215244 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 10:12:44.110998 kubelet[2658]: E0709 10:12:44.110720 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp5rv" podUID="80cd3875-f2cb-436b-b9bd-83f74aa24913" Jul 9 10:12:44.220731 containerd[1513]: time="2025-07-09T10:12:44.220315811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 9 10:12:46.120998 kubelet[2658]: E0709 10:12:46.120367 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp5rv" podUID="80cd3875-f2cb-436b-b9bd-83f74aa24913" Jul 9 10:12:46.615140 containerd[1513]: time="2025-07-09T10:12:46.615094741Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:46.616158 containerd[1513]: time="2025-07-09T10:12:46.616068531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 9 10:12:46.617163 containerd[1513]: time="2025-07-09T10:12:46.617111733Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:46.621726 containerd[1513]: time="2025-07-09T10:12:46.619513225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:46.621726 containerd[1513]: time="2025-07-09T10:12:46.621069146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.400706968s" Jul 9 10:12:46.621726 containerd[1513]: time="2025-07-09T10:12:46.621105992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 9 10:12:46.626474 containerd[1513]: time="2025-07-09T10:12:46.626426136Z" level=info msg="CreateContainer within sandbox \"f89918811db8e928cc8ea454d102918f8120a04070629e5cc6e30414203c284c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 9 10:12:46.647009 containerd[1513]: time="2025-07-09T10:12:46.645894433Z" level=info msg="Container 69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224: CDI devices from CRI Config.CDIDevices: 
[]" Jul 9 10:12:46.653985 containerd[1513]: time="2025-07-09T10:12:46.653927117Z" level=info msg="CreateContainer within sandbox \"f89918811db8e928cc8ea454d102918f8120a04070629e5cc6e30414203c284c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224\"" Jul 9 10:12:46.665482 containerd[1513]: time="2025-07-09T10:12:46.665434300Z" level=info msg="StartContainer for \"69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224\"" Jul 9 10:12:46.667096 containerd[1513]: time="2025-07-09T10:12:46.667047390Z" level=info msg="connecting to shim 69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224" address="unix:///run/containerd/s/909956e6611281aa6e4a3e36422b064daf2d63640777ac252d9a14e94c5ec484" protocol=ttrpc version=3 Jul 9 10:12:46.689947 systemd[1]: Started cri-containerd-69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224.scope - libcontainer container 69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224. Jul 9 10:12:46.730397 containerd[1513]: time="2025-07-09T10:12:46.730357719Z" level=info msg="StartContainer for \"69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224\" returns successfully" Jul 9 10:12:47.324930 systemd[1]: cri-containerd-69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224.scope: Deactivated successfully. Jul 9 10:12:47.325429 systemd[1]: cri-containerd-69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224.scope: Consumed 479ms CPU time, 173.8M memory peak, 1.6M read from disk, 165.8M written to disk. 
Jul 9 10:12:47.326080 containerd[1513]: time="2025-07-09T10:12:47.325919621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224\" id:\"69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224\" pid:3414 exited_at:{seconds:1752055967 nanos:325563888}" Jul 9 10:12:47.326337 containerd[1513]: time="2025-07-09T10:12:47.326241950Z" level=info msg="received exit event container_id:\"69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224\" id:\"69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224\" pid:3414 exited_at:{seconds:1752055967 nanos:325563888}" Jul 9 10:12:47.347760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69b8702ef2438b51d384d7e743fcebbbe48dffc42f81b3b777e5cec372c67224-rootfs.mount: Deactivated successfully. Jul 9 10:12:47.371463 kubelet[2658]: I0709 10:12:47.371422 2658 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 10:12:47.515926 systemd[1]: Created slice kubepods-besteffort-pod6529ece6_b02f_4f4a_9995_1fcc8f528fce.slice - libcontainer container kubepods-besteffort-pod6529ece6_b02f_4f4a_9995_1fcc8f528fce.slice. Jul 9 10:12:47.520217 systemd[1]: Created slice kubepods-burstable-pod74848606_f18c_4818_9f61_312c9532eeeb.slice - libcontainer container kubepods-burstable-pod74848606_f18c_4818_9f61_312c9532eeeb.slice. Jul 9 10:12:47.535238 systemd[1]: Created slice kubepods-burstable-pod8e450753_85f6_41de_ba56_37afb800c8c3.slice - libcontainer container kubepods-burstable-pod8e450753_85f6_41de_ba56_37afb800c8c3.slice. Jul 9 10:12:47.549986 systemd[1]: Created slice kubepods-besteffort-pod26ced58c_aa95_4329_9c3b_e15631430951.slice - libcontainer container kubepods-besteffort-pod26ced58c_aa95_4329_9c3b_e15631430951.slice. 
Jul 9 10:12:47.555514 systemd[1]: Created slice kubepods-besteffort-poda3c122d0_f6de_49bd_9842_5042fcddecbc.slice - libcontainer container kubepods-besteffort-poda3c122d0_f6de_49bd_9842_5042fcddecbc.slice. Jul 9 10:12:47.560111 systemd[1]: Created slice kubepods-besteffort-pod5e10e32b_6714_4d36_bb91_3840e2971150.slice - libcontainer container kubepods-besteffort-pod5e10e32b_6714_4d36_bb91_3840e2971150.slice. Jul 9 10:12:47.566797 systemd[1]: Created slice kubepods-besteffort-pod14eaede5_5e62_4c91_a43a_7a8023870df4.slice - libcontainer container kubepods-besteffort-pod14eaede5_5e62_4c91_a43a_7a8023870df4.slice. Jul 9 10:12:47.577462 kubelet[2658]: I0709 10:12:47.577314 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e10e32b-6714-4d36-bb91-3840e2971150-config\") pod \"goldmane-768f4c5c69-nl7w8\" (UID: \"5e10e32b-6714-4d36-bb91-3840e2971150\") " pod="calico-system/goldmane-768f4c5c69-nl7w8" Jul 9 10:12:47.577718 kubelet[2658]: I0709 10:12:47.577653 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6529ece6-b02f-4f4a-9995-1fcc8f528fce-calico-apiserver-certs\") pod \"calico-apiserver-57ccb5d897-zj5gr\" (UID: \"6529ece6-b02f-4f4a-9995-1fcc8f528fce\") " pod="calico-apiserver/calico-apiserver-57ccb5d897-zj5gr" Jul 9 10:12:47.577888 kubelet[2658]: I0709 10:12:47.577815 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md4vc\" (UniqueName: \"kubernetes.io/projected/14eaede5-5e62-4c91-a43a-7a8023870df4-kube-api-access-md4vc\") pod \"whisker-8c7c4db58-zs7vp\" (UID: \"14eaede5-5e62-4c91-a43a-7a8023870df4\") " pod="calico-system/whisker-8c7c4db58-zs7vp" Jul 9 10:12:47.577888 kubelet[2658]: I0709 10:12:47.577840 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-75dhq\" (UniqueName: \"kubernetes.io/projected/8e450753-85f6-41de-ba56-37afb800c8c3-kube-api-access-75dhq\") pod \"coredns-674b8bbfcf-ftcwk\" (UID: \"8e450753-85f6-41de-ba56-37afb800c8c3\") " pod="kube-system/coredns-674b8bbfcf-ftcwk" Jul 9 10:12:47.578001 kubelet[2658]: I0709 10:12:47.577861 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw6vl\" (UniqueName: \"kubernetes.io/projected/5e10e32b-6714-4d36-bb91-3840e2971150-kube-api-access-vw6vl\") pod \"goldmane-768f4c5c69-nl7w8\" (UID: \"5e10e32b-6714-4d36-bb91-3840e2971150\") " pod="calico-system/goldmane-768f4c5c69-nl7w8" Jul 9 10:12:47.578120 kubelet[2658]: I0709 10:12:47.578071 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3c122d0-f6de-49bd-9842-5042fcddecbc-tigera-ca-bundle\") pod \"calico-kube-controllers-5ffb5d8977-4vhmw\" (UID: \"a3c122d0-f6de-49bd-9842-5042fcddecbc\") " pod="calico-system/calico-kube-controllers-5ffb5d8977-4vhmw" Jul 9 10:12:47.578120 kubelet[2658]: I0709 10:12:47.578096 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14eaede5-5e62-4c91-a43a-7a8023870df4-whisker-ca-bundle\") pod \"whisker-8c7c4db58-zs7vp\" (UID: \"14eaede5-5e62-4c91-a43a-7a8023870df4\") " pod="calico-system/whisker-8c7c4db58-zs7vp" Jul 9 10:12:47.578284 kubelet[2658]: I0709 10:12:47.578230 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e450753-85f6-41de-ba56-37afb800c8c3-config-volume\") pod \"coredns-674b8bbfcf-ftcwk\" (UID: \"8e450753-85f6-41de-ba56-37afb800c8c3\") " pod="kube-system/coredns-674b8bbfcf-ftcwk" Jul 9 10:12:47.578284 kubelet[2658]: I0709 10:12:47.578255 2658 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74848606-f18c-4818-9f61-312c9532eeeb-config-volume\") pod \"coredns-674b8bbfcf-2c74h\" (UID: \"74848606-f18c-4818-9f61-312c9532eeeb\") " pod="kube-system/coredns-674b8bbfcf-2c74h" Jul 9 10:12:47.578367 kubelet[2658]: I0709 10:12:47.578355 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e10e32b-6714-4d36-bb91-3840e2971150-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-nl7w8\" (UID: \"5e10e32b-6714-4d36-bb91-3840e2971150\") " pod="calico-system/goldmane-768f4c5c69-nl7w8" Jul 9 10:12:47.578483 kubelet[2658]: I0709 10:12:47.578426 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5e10e32b-6714-4d36-bb91-3840e2971150-goldmane-key-pair\") pod \"goldmane-768f4c5c69-nl7w8\" (UID: \"5e10e32b-6714-4d36-bb91-3840e2971150\") " pod="calico-system/goldmane-768f4c5c69-nl7w8" Jul 9 10:12:47.578483 kubelet[2658]: I0709 10:12:47.578448 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln7lb\" (UniqueName: \"kubernetes.io/projected/74848606-f18c-4818-9f61-312c9532eeeb-kube-api-access-ln7lb\") pod \"coredns-674b8bbfcf-2c74h\" (UID: \"74848606-f18c-4818-9f61-312c9532eeeb\") " pod="kube-system/coredns-674b8bbfcf-2c74h" Jul 9 10:12:47.578483 kubelet[2658]: I0709 10:12:47.578464 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kr5s\" (UniqueName: \"kubernetes.io/projected/a3c122d0-f6de-49bd-9842-5042fcddecbc-kube-api-access-9kr5s\") pod \"calico-kube-controllers-5ffb5d8977-4vhmw\" (UID: \"a3c122d0-f6de-49bd-9842-5042fcddecbc\") " 
pod="calico-system/calico-kube-controllers-5ffb5d8977-4vhmw" Jul 9 10:12:47.578642 kubelet[2658]: I0709 10:12:47.578612 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdn9\" (UniqueName: \"kubernetes.io/projected/6529ece6-b02f-4f4a-9995-1fcc8f528fce-kube-api-access-xtdn9\") pod \"calico-apiserver-57ccb5d897-zj5gr\" (UID: \"6529ece6-b02f-4f4a-9995-1fcc8f528fce\") " pod="calico-apiserver/calico-apiserver-57ccb5d897-zj5gr" Jul 9 10:12:47.578740 kubelet[2658]: I0709 10:12:47.578685 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26ced58c-aa95-4329-9c3b-e15631430951-calico-apiserver-certs\") pod \"calico-apiserver-57ccb5d897-tkhzm\" (UID: \"26ced58c-aa95-4329-9c3b-e15631430951\") " pod="calico-apiserver/calico-apiserver-57ccb5d897-tkhzm" Jul 9 10:12:47.578782 kubelet[2658]: I0709 10:12:47.578741 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14eaede5-5e62-4c91-a43a-7a8023870df4-whisker-backend-key-pair\") pod \"whisker-8c7c4db58-zs7vp\" (UID: \"14eaede5-5e62-4c91-a43a-7a8023870df4\") " pod="calico-system/whisker-8c7c4db58-zs7vp" Jul 9 10:12:47.578782 kubelet[2658]: I0709 10:12:47.578763 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqh76\" (UniqueName: \"kubernetes.io/projected/26ced58c-aa95-4329-9c3b-e15631430951-kube-api-access-sqh76\") pod \"calico-apiserver-57ccb5d897-tkhzm\" (UID: \"26ced58c-aa95-4329-9c3b-e15631430951\") " pod="calico-apiserver/calico-apiserver-57ccb5d897-tkhzm" Jul 9 10:12:47.825125 containerd[1513]: time="2025-07-09T10:12:47.825082945Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57ccb5d897-zj5gr,Uid:6529ece6-b02f-4f4a-9995-1fcc8f528fce,Namespace:calico-apiserver,Attempt:0,}" Jul 9 10:12:47.825459 containerd[1513]: time="2025-07-09T10:12:47.825387991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2c74h,Uid:74848606-f18c-4818-9f61-312c9532eeeb,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:47.852809 containerd[1513]: time="2025-07-09T10:12:47.852298630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ftcwk,Uid:8e450753-85f6-41de-ba56-37afb800c8c3,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:47.867362 containerd[1513]: time="2025-07-09T10:12:47.866296971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb5d897-tkhzm,Uid:26ced58c-aa95-4329-9c3b-e15631430951,Namespace:calico-apiserver,Attempt:0,}" Jul 9 10:12:47.867362 containerd[1513]: time="2025-07-09T10:12:47.866341138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ffb5d8977-4vhmw,Uid:a3c122d0-f6de-49bd-9842-5042fcddecbc,Namespace:calico-system,Attempt:0,}" Jul 9 10:12:47.867362 containerd[1513]: time="2025-07-09T10:12:47.867199067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nl7w8,Uid:5e10e32b-6714-4d36-bb91-3840e2971150,Namespace:calico-system,Attempt:0,}" Jul 9 10:12:47.878620 containerd[1513]: time="2025-07-09T10:12:47.878226642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c7c4db58-zs7vp,Uid:14eaede5-5e62-4c91-a43a-7a8023870df4,Namespace:calico-system,Attempt:0,}" Jul 9 10:12:48.139485 systemd[1]: Created slice kubepods-besteffort-pod80cd3875_f2cb_436b_b9bd_83f74aa24913.slice - libcontainer container kubepods-besteffort-pod80cd3875_f2cb_436b_b9bd_83f74aa24913.slice. 
Jul 9 10:12:48.155412 containerd[1513]: time="2025-07-09T10:12:48.149462423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp5rv,Uid:80cd3875-f2cb-436b-b9bd-83f74aa24913,Namespace:calico-system,Attempt:0,}" Jul 9 10:12:48.247378 containerd[1513]: time="2025-07-09T10:12:48.246802178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 9 10:12:48.282219 containerd[1513]: time="2025-07-09T10:12:48.282136676Z" level=error msg="Failed to destroy network for sandbox \"52a2842999f90a3332feb0e559ae5267b3bdd814870538ea086afabd11d31a66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.282359 containerd[1513]: time="2025-07-09T10:12:48.282265734Z" level=error msg="Failed to destroy network for sandbox \"2d0da588b662f6826d984ec700831e545476c60f2c39f4be2810a06a330c1a5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.291727 containerd[1513]: time="2025-07-09T10:12:48.291546644Z" level=error msg="Failed to destroy network for sandbox \"f4a325ae46b69d97c79880a5fe77a7e51c4c1aac6b2e966719b85077bcf5512f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.293170 containerd[1513]: time="2025-07-09T10:12:48.293125913Z" level=error msg="Failed to destroy network for sandbox \"64dd152c93a566ecfbef9e6549d77782a93aa47af9d934905372193ec29fa861\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.299703 containerd[1513]: 
time="2025-07-09T10:12:48.299639181Z" level=error msg="Failed to destroy network for sandbox \"354dc7b06cb29adebb5a10527ed2c02aba8149d094e98880fcf3b52d9ddb802c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.300516 containerd[1513]: time="2025-07-09T10:12:48.300400691Z" level=error msg="Failed to destroy network for sandbox \"1050c3888eb3567d130ab602efeb7fd7b5f0f7893ae14d0f259d3c35525bae6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.301938 containerd[1513]: time="2025-07-09T10:12:48.301887027Z" level=error msg="Failed to destroy network for sandbox \"fbd98b72793648ef1913417ae41bd063b66842d06a64e20e810fc44702078195\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.303327 containerd[1513]: time="2025-07-09T10:12:48.303273909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb5d897-tkhzm,Uid:26ced58c-aa95-4329-9c3b-e15631430951,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a2842999f90a3332feb0e559ae5267b3bdd814870538ea086afabd11d31a66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.303659 kubelet[2658]: E0709 10:12:48.303615 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a2842999f90a3332feb0e559ae5267b3bdd814870538ea086afabd11d31a66\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.307773 containerd[1513]: time="2025-07-09T10:12:48.307722876Z" level=error msg="Failed to destroy network for sandbox \"7f54e855fe78e8a56e0a5d118ea3617f135f545fb4614ca8e9ef9da187cc5f80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.309884 kubelet[2658]: E0709 10:12:48.309827 2658 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a2842999f90a3332feb0e559ae5267b3bdd814870538ea086afabd11d31a66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb5d897-tkhzm" Jul 9 10:12:48.309975 kubelet[2658]: E0709 10:12:48.309889 2658 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a2842999f90a3332feb0e559ae5267b3bdd814870538ea086afabd11d31a66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb5d897-tkhzm" Jul 9 10:12:48.309975 kubelet[2658]: E0709 10:12:48.309957 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57ccb5d897-tkhzm_calico-apiserver(26ced58c-aa95-4329-9c3b-e15631430951)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57ccb5d897-tkhzm_calico-apiserver(26ced58c-aa95-4329-9c3b-e15631430951)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"52a2842999f90a3332feb0e559ae5267b3bdd814870538ea086afabd11d31a66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57ccb5d897-tkhzm" podUID="26ced58c-aa95-4329-9c3b-e15631430951" Jul 9 10:12:48.313413 containerd[1513]: time="2025-07-09T10:12:48.313356895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ftcwk,Uid:8e450753-85f6-41de-ba56-37afb800c8c3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d0da588b662f6826d984ec700831e545476c60f2c39f4be2810a06a330c1a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.313645 kubelet[2658]: E0709 10:12:48.313601 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d0da588b662f6826d984ec700831e545476c60f2c39f4be2810a06a330c1a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.313702 kubelet[2658]: E0709 10:12:48.313667 2658 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d0da588b662f6826d984ec700831e545476c60f2c39f4be2810a06a330c1a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ftcwk" Jul 9 10:12:48.313702 kubelet[2658]: E0709 10:12:48.313685 2658 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d0da588b662f6826d984ec700831e545476c60f2c39f4be2810a06a330c1a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ftcwk" Jul 9 10:12:48.313794 kubelet[2658]: E0709 10:12:48.313759 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ftcwk_kube-system(8e450753-85f6-41de-ba56-37afb800c8c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ftcwk_kube-system(8e450753-85f6-41de-ba56-37afb800c8c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d0da588b662f6826d984ec700831e545476c60f2c39f4be2810a06a330c1a5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ftcwk" podUID="8e450753-85f6-41de-ba56-37afb800c8c3" Jul 9 10:12:48.315116 containerd[1513]: time="2025-07-09T10:12:48.314679287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nl7w8,Uid:5e10e32b-6714-4d36-bb91-3840e2971150,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4a325ae46b69d97c79880a5fe77a7e51c4c1aac6b2e966719b85077bcf5512f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.315346 kubelet[2658]: E0709 10:12:48.315312 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f4a325ae46b69d97c79880a5fe77a7e51c4c1aac6b2e966719b85077bcf5512f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.315397 kubelet[2658]: E0709 10:12:48.315375 2658 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4a325ae46b69d97c79880a5fe77a7e51c4c1aac6b2e966719b85077bcf5512f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nl7w8" Jul 9 10:12:48.315428 kubelet[2658]: E0709 10:12:48.315393 2658 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4a325ae46b69d97c79880a5fe77a7e51c4c1aac6b2e966719b85077bcf5512f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nl7w8" Jul 9 10:12:48.315449 kubelet[2658]: E0709 10:12:48.315429 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-nl7w8_calico-system(5e10e32b-6714-4d36-bb91-3840e2971150)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-nl7w8_calico-system(5e10e32b-6714-4d36-bb91-3840e2971150)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4a325ae46b69d97c79880a5fe77a7e51c4c1aac6b2e966719b85077bcf5512f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-nl7w8" 
podUID="5e10e32b-6714-4d36-bb91-3840e2971150" Jul 9 10:12:48.315633 containerd[1513]: time="2025-07-09T10:12:48.315545493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ffb5d8977-4vhmw,Uid:a3c122d0-f6de-49bd-9842-5042fcddecbc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64dd152c93a566ecfbef9e6549d77782a93aa47af9d934905372193ec29fa861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.316749 kubelet[2658]: E0709 10:12:48.316624 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64dd152c93a566ecfbef9e6549d77782a93aa47af9d934905372193ec29fa861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.316749 kubelet[2658]: E0709 10:12:48.316668 2658 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64dd152c93a566ecfbef9e6549d77782a93aa47af9d934905372193ec29fa861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5ffb5d8977-4vhmw" Jul 9 10:12:48.316749 kubelet[2658]: E0709 10:12:48.316688 2658 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64dd152c93a566ecfbef9e6549d77782a93aa47af9d934905372193ec29fa861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5ffb5d8977-4vhmw" Jul 9 10:12:48.316927 kubelet[2658]: E0709 10:12:48.316764 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5ffb5d8977-4vhmw_calico-system(a3c122d0-f6de-49bd-9842-5042fcddecbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5ffb5d8977-4vhmw_calico-system(a3c122d0-f6de-49bd-9842-5042fcddecbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64dd152c93a566ecfbef9e6549d77782a93aa47af9d934905372193ec29fa861\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5ffb5d8977-4vhmw" podUID="a3c122d0-f6de-49bd-9842-5042fcddecbc" Jul 9 10:12:48.317687 containerd[1513]: time="2025-07-09T10:12:48.317586550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2c74h,Uid:74848606-f18c-4818-9f61-312c9532eeeb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"354dc7b06cb29adebb5a10527ed2c02aba8149d094e98880fcf3b52d9ddb802c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.317956 kubelet[2658]: E0709 10:12:48.317923 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354dc7b06cb29adebb5a10527ed2c02aba8149d094e98880fcf3b52d9ddb802c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.317994 
kubelet[2658]: E0709 10:12:48.317968 2658 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354dc7b06cb29adebb5a10527ed2c02aba8149d094e98880fcf3b52d9ddb802c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2c74h" Jul 9 10:12:48.318217 kubelet[2658]: E0709 10:12:48.317984 2658 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354dc7b06cb29adebb5a10527ed2c02aba8149d094e98880fcf3b52d9ddb802c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2c74h" Jul 9 10:12:48.318217 kubelet[2658]: E0709 10:12:48.318046 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2c74h_kube-system(74848606-f18c-4818-9f61-312c9532eeeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2c74h_kube-system(74848606-f18c-4818-9f61-312c9532eeeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"354dc7b06cb29adebb5a10527ed2c02aba8149d094e98880fcf3b52d9ddb802c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2c74h" podUID="74848606-f18c-4818-9f61-312c9532eeeb" Jul 9 10:12:48.318359 containerd[1513]: time="2025-07-09T10:12:48.318130349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c7c4db58-zs7vp,Uid:14eaede5-5e62-4c91-a43a-7a8023870df4,Namespace:calico-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1050c3888eb3567d130ab602efeb7fd7b5f0f7893ae14d0f259d3c35525bae6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.318937 kubelet[2658]: E0709 10:12:48.318833 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1050c3888eb3567d130ab602efeb7fd7b5f0f7893ae14d0f259d3c35525bae6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.318937 kubelet[2658]: E0709 10:12:48.318874 2658 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1050c3888eb3567d130ab602efeb7fd7b5f0f7893ae14d0f259d3c35525bae6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8c7c4db58-zs7vp" Jul 9 10:12:48.318937 kubelet[2658]: E0709 10:12:48.318890 2658 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1050c3888eb3567d130ab602efeb7fd7b5f0f7893ae14d0f259d3c35525bae6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8c7c4db58-zs7vp" Jul 9 10:12:48.319100 containerd[1513]: time="2025-07-09T10:12:48.318878218Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57ccb5d897-zj5gr,Uid:6529ece6-b02f-4f4a-9995-1fcc8f528fce,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbd98b72793648ef1913417ae41bd063b66842d06a64e20e810fc44702078195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.319145 kubelet[2658]: E0709 10:12:48.318931 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8c7c4db58-zs7vp_calico-system(14eaede5-5e62-4c91-a43a-7a8023870df4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8c7c4db58-zs7vp_calico-system(14eaede5-5e62-4c91-a43a-7a8023870df4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1050c3888eb3567d130ab602efeb7fd7b5f0f7893ae14d0f259d3c35525bae6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8c7c4db58-zs7vp" podUID="14eaede5-5e62-4c91-a43a-7a8023870df4" Jul 9 10:12:48.319145 kubelet[2658]: E0709 10:12:48.319017 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbd98b72793648ef1913417ae41bd063b66842d06a64e20e810fc44702078195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.319145 kubelet[2658]: E0709 10:12:48.319057 2658 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbd98b72793648ef1913417ae41bd063b66842d06a64e20e810fc44702078195\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb5d897-zj5gr" Jul 9 10:12:48.319231 kubelet[2658]: E0709 10:12:48.319073 2658 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbd98b72793648ef1913417ae41bd063b66842d06a64e20e810fc44702078195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb5d897-zj5gr" Jul 9 10:12:48.319231 kubelet[2658]: E0709 10:12:48.319114 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57ccb5d897-zj5gr_calico-apiserver(6529ece6-b02f-4f4a-9995-1fcc8f528fce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57ccb5d897-zj5gr_calico-apiserver(6529ece6-b02f-4f4a-9995-1fcc8f528fce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbd98b72793648ef1913417ae41bd063b66842d06a64e20e810fc44702078195\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57ccb5d897-zj5gr" podUID="6529ece6-b02f-4f4a-9995-1fcc8f528fce" Jul 9 10:12:48.319895 containerd[1513]: time="2025-07-09T10:12:48.319799312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp5rv,Uid:80cd3875-f2cb-436b-b9bd-83f74aa24913,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f54e855fe78e8a56e0a5d118ea3617f135f545fb4614ca8e9ef9da187cc5f80\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.320081 kubelet[2658]: E0709 10:12:48.320045 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f54e855fe78e8a56e0a5d118ea3617f135f545fb4614ca8e9ef9da187cc5f80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 10:12:48.320132 kubelet[2658]: E0709 10:12:48.320115 2658 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f54e855fe78e8a56e0a5d118ea3617f135f545fb4614ca8e9ef9da187cc5f80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gp5rv" Jul 9 10:12:48.320164 kubelet[2658]: E0709 10:12:48.320136 2658 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f54e855fe78e8a56e0a5d118ea3617f135f545fb4614ca8e9ef9da187cc5f80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gp5rv" Jul 9 10:12:48.320222 kubelet[2658]: E0709 10:12:48.320197 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gp5rv_calico-system(80cd3875-f2cb-436b-b9bd-83f74aa24913)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gp5rv_calico-system(80cd3875-f2cb-436b-b9bd-83f74aa24913)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"7f54e855fe78e8a56e0a5d118ea3617f135f545fb4614ca8e9ef9da187cc5f80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gp5rv" podUID="80cd3875-f2cb-436b-b9bd-83f74aa24913" Jul 9 10:12:51.166945 kubelet[2658]: I0709 10:12:51.166903 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 10:12:52.374416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347667123.mount: Deactivated successfully. Jul 9 10:12:52.531577 containerd[1513]: time="2025-07-09T10:12:52.514515216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 9 10:12:52.531577 containerd[1513]: time="2025-07-09T10:12:52.523164483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:52.541882 containerd[1513]: time="2025-07-09T10:12:52.541847156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.294535425s" Jul 9 10:12:52.542107 containerd[1513]: time="2025-07-09T10:12:52.542014857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 9 10:12:52.542247 containerd[1513]: time="2025-07-09T10:12:52.542213963Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:52.546033 
containerd[1513]: time="2025-07-09T10:12:52.545987046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:52.561422 containerd[1513]: time="2025-07-09T10:12:52.561381338Z" level=info msg="CreateContainer within sandbox \"f89918811db8e928cc8ea454d102918f8120a04070629e5cc6e30414203c284c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 9 10:12:52.585739 containerd[1513]: time="2025-07-09T10:12:52.585638964Z" level=info msg="Container 75f9dcef2e1db694f9fbcbc8993c25f94c51ba63181b2eb365517e4938f493af: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:52.595554 containerd[1513]: time="2025-07-09T10:12:52.595498627Z" level=info msg="CreateContainer within sandbox \"f89918811db8e928cc8ea454d102918f8120a04070629e5cc6e30414203c284c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"75f9dcef2e1db694f9fbcbc8993c25f94c51ba63181b2eb365517e4938f493af\"" Jul 9 10:12:52.596240 containerd[1513]: time="2025-07-09T10:12:52.596211438Z" level=info msg="StartContainer for \"75f9dcef2e1db694f9fbcbc8993c25f94c51ba63181b2eb365517e4938f493af\"" Jul 9 10:12:52.597784 containerd[1513]: time="2025-07-09T10:12:52.597754316Z" level=info msg="connecting to shim 75f9dcef2e1db694f9fbcbc8993c25f94c51ba63181b2eb365517e4938f493af" address="unix:///run/containerd/s/909956e6611281aa6e4a3e36422b064daf2d63640777ac252d9a14e94c5ec484" protocol=ttrpc version=3 Jul 9 10:12:52.652873 systemd[1]: Started cri-containerd-75f9dcef2e1db694f9fbcbc8993c25f94c51ba63181b2eb365517e4938f493af.scope - libcontainer container 75f9dcef2e1db694f9fbcbc8993c25f94c51ba63181b2eb365517e4938f493af. 
Jul 9 10:12:52.689018 containerd[1513]: time="2025-07-09T10:12:52.688980559Z" level=info msg="StartContainer for \"75f9dcef2e1db694f9fbcbc8993c25f94c51ba63181b2eb365517e4938f493af\" returns successfully" Jul 9 10:12:52.906522 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 9 10:12:52.906644 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 9 10:12:53.130641 kubelet[2658]: I0709 10:12:53.130597 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14eaede5-5e62-4c91-a43a-7a8023870df4-whisker-ca-bundle\") pod \"14eaede5-5e62-4c91-a43a-7a8023870df4\" (UID: \"14eaede5-5e62-4c91-a43a-7a8023870df4\") " Jul 9 10:12:53.130981 kubelet[2658]: I0709 10:12:53.130658 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md4vc\" (UniqueName: \"kubernetes.io/projected/14eaede5-5e62-4c91-a43a-7a8023870df4-kube-api-access-md4vc\") pod \"14eaede5-5e62-4c91-a43a-7a8023870df4\" (UID: \"14eaede5-5e62-4c91-a43a-7a8023870df4\") " Jul 9 10:12:53.130981 kubelet[2658]: I0709 10:12:53.130684 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14eaede5-5e62-4c91-a43a-7a8023870df4-whisker-backend-key-pair\") pod \"14eaede5-5e62-4c91-a43a-7a8023870df4\" (UID: \"14eaede5-5e62-4c91-a43a-7a8023870df4\") " Jul 9 10:12:53.147324 kubelet[2658]: I0709 10:12:53.147231 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14eaede5-5e62-4c91-a43a-7a8023870df4-kube-api-access-md4vc" (OuterVolumeSpecName: "kube-api-access-md4vc") pod "14eaede5-5e62-4c91-a43a-7a8023870df4" (UID: "14eaede5-5e62-4c91-a43a-7a8023870df4"). InnerVolumeSpecName "kube-api-access-md4vc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 10:12:53.148244 kubelet[2658]: I0709 10:12:53.148218 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14eaede5-5e62-4c91-a43a-7a8023870df4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "14eaede5-5e62-4c91-a43a-7a8023870df4" (UID: "14eaede5-5e62-4c91-a43a-7a8023870df4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 10:12:53.148649 kubelet[2658]: I0709 10:12:53.148617 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14eaede5-5e62-4c91-a43a-7a8023870df4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "14eaede5-5e62-4c91-a43a-7a8023870df4" (UID: "14eaede5-5e62-4c91-a43a-7a8023870df4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 10:12:53.231845 kubelet[2658]: I0709 10:12:53.231804 2658 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14eaede5-5e62-4c91-a43a-7a8023870df4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 9 10:12:53.231845 kubelet[2658]: I0709 10:12:53.231838 2658 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-md4vc\" (UniqueName: \"kubernetes.io/projected/14eaede5-5e62-4c91-a43a-7a8023870df4-kube-api-access-md4vc\") on node \"localhost\" DevicePath \"\"" Jul 9 10:12:53.231845 kubelet[2658]: I0709 10:12:53.231847 2658 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14eaede5-5e62-4c91-a43a-7a8023870df4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 9 10:12:53.266272 systemd[1]: Removed slice kubepods-besteffort-pod14eaede5_5e62_4c91_a43a_7a8023870df4.slice - libcontainer container 
kubepods-besteffort-pod14eaede5_5e62_4c91_a43a_7a8023870df4.slice. Jul 9 10:12:53.284173 kubelet[2658]: I0709 10:12:53.284095 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2qr59" podStartSLOduration=2.159583141 podStartE2EDuration="14.284078079s" podCreationTimestamp="2025-07-09 10:12:39 +0000 UTC" firstStartedPulling="2025-07-09 10:12:40.421854555 +0000 UTC m=+18.394300419" lastFinishedPulling="2025-07-09 10:12:52.546349533 +0000 UTC m=+30.518795357" observedRunningTime="2025-07-09 10:12:53.283140843 +0000 UTC m=+31.255586707" watchObservedRunningTime="2025-07-09 10:12:53.284078079 +0000 UTC m=+31.256523943" Jul 9 10:12:53.330080 systemd[1]: Created slice kubepods-besteffort-pod2d62aedc_5e27_417e_ab34_959aa4bf8c34.slice - libcontainer container kubepods-besteffort-pod2d62aedc_5e27_417e_ab34_959aa4bf8c34.slice. Jul 9 10:12:53.375291 systemd[1]: var-lib-kubelet-pods-14eaede5\x2d5e62\x2d4c91\x2da43a\x2d7a8023870df4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmd4vc.mount: Deactivated successfully. Jul 9 10:12:53.375389 systemd[1]: var-lib-kubelet-pods-14eaede5\x2d5e62\x2d4c91\x2da43a\x2d7a8023870df4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 9 10:12:53.438475 kubelet[2658]: I0709 10:12:53.438338 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl5hk\" (UniqueName: \"kubernetes.io/projected/2d62aedc-5e27-417e-ab34-959aa4bf8c34-kube-api-access-sl5hk\") pod \"whisker-6c69cddcd6-2b44j\" (UID: \"2d62aedc-5e27-417e-ab34-959aa4bf8c34\") " pod="calico-system/whisker-6c69cddcd6-2b44j" Jul 9 10:12:53.438475 kubelet[2658]: I0709 10:12:53.438402 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2d62aedc-5e27-417e-ab34-959aa4bf8c34-whisker-backend-key-pair\") pod \"whisker-6c69cddcd6-2b44j\" (UID: \"2d62aedc-5e27-417e-ab34-959aa4bf8c34\") " pod="calico-system/whisker-6c69cddcd6-2b44j" Jul 9 10:12:53.438475 kubelet[2658]: I0709 10:12:53.438434 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d62aedc-5e27-417e-ab34-959aa4bf8c34-whisker-ca-bundle\") pod \"whisker-6c69cddcd6-2b44j\" (UID: \"2d62aedc-5e27-417e-ab34-959aa4bf8c34\") " pod="calico-system/whisker-6c69cddcd6-2b44j" Jul 9 10:12:53.635829 containerd[1513]: time="2025-07-09T10:12:53.635784115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c69cddcd6-2b44j,Uid:2d62aedc-5e27-417e-ab34-959aa4bf8c34,Namespace:calico-system,Attempt:0,}" Jul 9 10:12:53.887066 systemd-networkd[1439]: calia4355e51d25: Link UP Jul 9 10:12:53.887356 systemd-networkd[1439]: calia4355e51d25: Gained carrier Jul 9 10:12:53.901180 containerd[1513]: 2025-07-09 10:12:53.656 [INFO][3798] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 10:12:53.901180 containerd[1513]: 2025-07-09 10:12:53.696 [INFO][3798] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c69cddcd6--2b44j-eth0 
whisker-6c69cddcd6- calico-system 2d62aedc-5e27-417e-ab34-959aa4bf8c34 873 0 2025-07-09 10:12:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c69cddcd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c69cddcd6-2b44j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia4355e51d25 [] [] }} ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Namespace="calico-system" Pod="whisker-6c69cddcd6-2b44j" WorkloadEndpoint="localhost-k8s-whisker--6c69cddcd6--2b44j-" Jul 9 10:12:53.901180 containerd[1513]: 2025-07-09 10:12:53.697 [INFO][3798] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Namespace="calico-system" Pod="whisker-6c69cddcd6-2b44j" WorkloadEndpoint="localhost-k8s-whisker--6c69cddcd6--2b44j-eth0" Jul 9 10:12:53.901180 containerd[1513]: 2025-07-09 10:12:53.830 [INFO][3813] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" HandleID="k8s-pod-network.dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Workload="localhost-k8s-whisker--6c69cddcd6--2b44j-eth0" Jul 9 10:12:53.901481 containerd[1513]: 2025-07-09 10:12:53.830 [INFO][3813] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" HandleID="k8s-pod-network.dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Workload="localhost-k8s-whisker--6c69cddcd6--2b44j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000134c20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c69cddcd6-2b44j", "timestamp":"2025-07-09 10:12:53.830411142 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 10:12:53.901481 containerd[1513]: 2025-07-09 10:12:53.830 [INFO][3813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 10:12:53.901481 containerd[1513]: 2025-07-09 10:12:53.830 [INFO][3813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 10:12:53.901481 containerd[1513]: 2025-07-09 10:12:53.831 [INFO][3813] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 10:12:53.901481 containerd[1513]: 2025-07-09 10:12:53.845 [INFO][3813] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" host="localhost" Jul 9 10:12:53.901481 containerd[1513]: 2025-07-09 10:12:53.856 [INFO][3813] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 10:12:53.901481 containerd[1513]: 2025-07-09 10:12:53.862 [INFO][3813] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 10:12:53.901481 containerd[1513]: 2025-07-09 10:12:53.864 [INFO][3813] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 10:12:53.901481 containerd[1513]: 2025-07-09 10:12:53.866 [INFO][3813] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 10:12:53.901481 containerd[1513]: 2025-07-09 10:12:53.866 [INFO][3813] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" host="localhost" Jul 9 10:12:53.901865 containerd[1513]: 2025-07-09 10:12:53.868 [INFO][3813] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e Jul 9 10:12:53.901865 containerd[1513]: 
2025-07-09 10:12:53.872 [INFO][3813] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" host="localhost" Jul 9 10:12:53.901865 containerd[1513]: 2025-07-09 10:12:53.877 [INFO][3813] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" host="localhost" Jul 9 10:12:53.901865 containerd[1513]: 2025-07-09 10:12:53.877 [INFO][3813] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" host="localhost" Jul 9 10:12:53.901865 containerd[1513]: 2025-07-09 10:12:53.877 [INFO][3813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 10:12:53.901865 containerd[1513]: 2025-07-09 10:12:53.877 [INFO][3813] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" HandleID="k8s-pod-network.dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Workload="localhost-k8s-whisker--6c69cddcd6--2b44j-eth0" Jul 9 10:12:53.901984 containerd[1513]: 2025-07-09 10:12:53.880 [INFO][3798] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Namespace="calico-system" Pod="whisker-6c69cddcd6-2b44j" WorkloadEndpoint="localhost-k8s-whisker--6c69cddcd6--2b44j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c69cddcd6--2b44j-eth0", GenerateName:"whisker-6c69cddcd6-", Namespace:"calico-system", SelfLink:"", UID:"2d62aedc-5e27-417e-ab34-959aa4bf8c34", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 
12, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c69cddcd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c69cddcd6-2b44j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4355e51d25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:12:53.901984 containerd[1513]: 2025-07-09 10:12:53.880 [INFO][3798] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Namespace="calico-system" Pod="whisker-6c69cddcd6-2b44j" WorkloadEndpoint="localhost-k8s-whisker--6c69cddcd6--2b44j-eth0" Jul 9 10:12:53.902059 containerd[1513]: 2025-07-09 10:12:53.880 [INFO][3798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4355e51d25 ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Namespace="calico-system" Pod="whisker-6c69cddcd6-2b44j" WorkloadEndpoint="localhost-k8s-whisker--6c69cddcd6--2b44j-eth0" Jul 9 10:12:53.902059 containerd[1513]: 2025-07-09 10:12:53.889 [INFO][3798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Namespace="calico-system" Pod="whisker-6c69cddcd6-2b44j" 
WorkloadEndpoint="localhost-k8s-whisker--6c69cddcd6--2b44j-eth0" Jul 9 10:12:53.902099 containerd[1513]: 2025-07-09 10:12:53.890 [INFO][3798] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Namespace="calico-system" Pod="whisker-6c69cddcd6-2b44j" WorkloadEndpoint="localhost-k8s-whisker--6c69cddcd6--2b44j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c69cddcd6--2b44j-eth0", GenerateName:"whisker-6c69cddcd6-", Namespace:"calico-system", SelfLink:"", UID:"2d62aedc-5e27-417e-ab34-959aa4bf8c34", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c69cddcd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e", Pod:"whisker-6c69cddcd6-2b44j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4355e51d25", MAC:"be:cb:0b:6f:ee:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:12:53.902145 containerd[1513]: 2025-07-09 10:12:53.898 [INFO][3798] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" Namespace="calico-system" Pod="whisker-6c69cddcd6-2b44j" WorkloadEndpoint="localhost-k8s-whisker--6c69cddcd6--2b44j-eth0" Jul 9 10:12:53.961857 containerd[1513]: time="2025-07-09T10:12:53.961812004Z" level=info msg="connecting to shim dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e" address="unix:///run/containerd/s/e14a46d7a53a5ed9aa8bdd49e547d0fdf9bcef536ef132c229ea86f15ed2b171" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:53.991923 systemd[1]: Started cri-containerd-dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e.scope - libcontainer container dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e. Jul 9 10:12:54.004293 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:12:54.034795 containerd[1513]: time="2025-07-09T10:12:54.034746761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c69cddcd6-2b44j,Uid:2d62aedc-5e27-417e-ab34-959aa4bf8c34,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e\"" Jul 9 10:12:54.037131 containerd[1513]: time="2025-07-09T10:12:54.036894179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 9 10:12:54.114166 kubelet[2658]: I0709 10:12:54.114122 2658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14eaede5-5e62-4c91-a43a-7a8023870df4" path="/var/lib/kubelet/pods/14eaede5-5e62-4c91-a43a-7a8023870df4/volumes" Jul 9 10:12:54.258945 kubelet[2658]: I0709 10:12:54.258844 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 10:12:54.673132 systemd-networkd[1439]: vxlan.calico: Link UP Jul 9 10:12:54.673143 systemd-networkd[1439]: vxlan.calico: Gained carrier Jul 9 10:12:55.329723 containerd[1513]: 
time="2025-07-09T10:12:55.329664327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:55.330149 containerd[1513]: time="2025-07-09T10:12:55.330098378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 9 10:12:55.331089 containerd[1513]: time="2025-07-09T10:12:55.331061250Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:55.333144 containerd[1513]: time="2025-07-09T10:12:55.333101927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:55.334038 containerd[1513]: time="2025-07-09T10:12:55.334005833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.297073129s" Jul 9 10:12:55.334070 containerd[1513]: time="2025-07-09T10:12:55.334037676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 9 10:12:55.337307 containerd[1513]: time="2025-07-09T10:12:55.337274813Z" level=info msg="CreateContainer within sandbox \"dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 9 10:12:55.346013 containerd[1513]: time="2025-07-09T10:12:55.344822332Z" level=info msg="Container 
8474e0130a3696dde9c499cc93fe5bec95e03951c59915a7a38f7378b8b783fc: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:55.355273 containerd[1513]: time="2025-07-09T10:12:55.355175858Z" level=info msg="CreateContainer within sandbox \"dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8474e0130a3696dde9c499cc93fe5bec95e03951c59915a7a38f7378b8b783fc\"" Jul 9 10:12:55.355906 containerd[1513]: time="2025-07-09T10:12:55.355840895Z" level=info msg="StartContainer for \"8474e0130a3696dde9c499cc93fe5bec95e03951c59915a7a38f7378b8b783fc\"" Jul 9 10:12:55.357066 containerd[1513]: time="2025-07-09T10:12:55.357032394Z" level=info msg="connecting to shim 8474e0130a3696dde9c499cc93fe5bec95e03951c59915a7a38f7378b8b783fc" address="unix:///run/containerd/s/e14a46d7a53a5ed9aa8bdd49e547d0fdf9bcef536ef132c229ea86f15ed2b171" protocol=ttrpc version=3 Jul 9 10:12:55.383893 systemd-networkd[1439]: calia4355e51d25: Gained IPv6LL Jul 9 10:12:55.385116 systemd[1]: Started cri-containerd-8474e0130a3696dde9c499cc93fe5bec95e03951c59915a7a38f7378b8b783fc.scope - libcontainer container 8474e0130a3696dde9c499cc93fe5bec95e03951c59915a7a38f7378b8b783fc. 
Jul 9 10:12:55.419042 containerd[1513]: time="2025-07-09T10:12:55.419006050Z" level=info msg="StartContainer for \"8474e0130a3696dde9c499cc93fe5bec95e03951c59915a7a38f7378b8b783fc\" returns successfully" Jul 9 10:12:55.420604 containerd[1513]: time="2025-07-09T10:12:55.420380050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 9 10:12:56.599869 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL Jul 9 10:12:56.851433 kubelet[2658]: I0709 10:12:56.851037 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 10:12:57.006343 containerd[1513]: time="2025-07-09T10:12:57.006271109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75f9dcef2e1db694f9fbcbc8993c25f94c51ba63181b2eb365517e4938f493af\" id:\"a334829e26cddfdda2244878f833142fa45ec5053f1c66896acda694978aca34\" pid:4126 exited_at:{seconds:1752055977 nanos:5974037}" Jul 9 10:12:57.105061 containerd[1513]: time="2025-07-09T10:12:57.104820638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75f9dcef2e1db694f9fbcbc8993c25f94c51ba63181b2eb365517e4938f493af\" id:\"37f7899e604087b0a9033a7c2bf147bb4e725eb39966afb78840cb73efe66f9a\" pid:4154 exited_at:{seconds:1752055977 nanos:104439237}" Jul 9 10:12:57.248649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498072411.mount: Deactivated successfully. 
Jul 9 10:12:57.262008 containerd[1513]: time="2025-07-09T10:12:57.261966810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:57.262434 containerd[1513]: time="2025-07-09T10:12:57.262390136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 9 10:12:57.263286 containerd[1513]: time="2025-07-09T10:12:57.263243830Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:57.267402 containerd[1513]: time="2025-07-09T10:12:57.266661043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.846247589s" Jul 9 10:12:57.267402 containerd[1513]: time="2025-07-09T10:12:57.266696087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 9 10:12:57.267402 containerd[1513]: time="2025-07-09T10:12:57.266945554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:57.270629 containerd[1513]: time="2025-07-09T10:12:57.270601554Z" level=info msg="CreateContainer within sandbox \"dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 9 10:12:57.279745 
containerd[1513]: time="2025-07-09T10:12:57.279685546Z" level=info msg="Container ecec65f9c27890ad9a0e4e584a67ed6573e82ce58ebaec1bb5f2ff514cf119ab: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:57.286209 containerd[1513]: time="2025-07-09T10:12:57.286173855Z" level=info msg="CreateContainer within sandbox \"dcfd81189276504ce7a4bc76d08b5a81a237903d9e1ab5c01533e15a50c0333e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ecec65f9c27890ad9a0e4e584a67ed6573e82ce58ebaec1bb5f2ff514cf119ab\"" Jul 9 10:12:57.286757 containerd[1513]: time="2025-07-09T10:12:57.286679470Z" level=info msg="StartContainer for \"ecec65f9c27890ad9a0e4e584a67ed6573e82ce58ebaec1bb5f2ff514cf119ab\"" Jul 9 10:12:57.287963 containerd[1513]: time="2025-07-09T10:12:57.287927047Z" level=info msg="connecting to shim ecec65f9c27890ad9a0e4e584a67ed6573e82ce58ebaec1bb5f2ff514cf119ab" address="unix:///run/containerd/s/e14a46d7a53a5ed9aa8bdd49e547d0fdf9bcef536ef132c229ea86f15ed2b171" protocol=ttrpc version=3 Jul 9 10:12:57.313864 systemd[1]: Started cri-containerd-ecec65f9c27890ad9a0e4e584a67ed6573e82ce58ebaec1bb5f2ff514cf119ab.scope - libcontainer container ecec65f9c27890ad9a0e4e584a67ed6573e82ce58ebaec1bb5f2ff514cf119ab. 
Jul 9 10:12:57.348660 containerd[1513]: time="2025-07-09T10:12:57.348613878Z" level=info msg="StartContainer for \"ecec65f9c27890ad9a0e4e584a67ed6573e82ce58ebaec1bb5f2ff514cf119ab\" returns successfully" Jul 9 10:12:58.284036 kubelet[2658]: I0709 10:12:58.282673 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c69cddcd6-2b44j" podStartSLOduration=2.051015505 podStartE2EDuration="5.282654501s" podCreationTimestamp="2025-07-09 10:12:53 +0000 UTC" firstStartedPulling="2025-07-09 10:12:54.036168892 +0000 UTC m=+32.008614756" lastFinishedPulling="2025-07-09 10:12:57.267807888 +0000 UTC m=+35.240253752" observedRunningTime="2025-07-09 10:12:58.281358004 +0000 UTC m=+36.253803908" watchObservedRunningTime="2025-07-09 10:12:58.282654501 +0000 UTC m=+36.255100365" Jul 9 10:12:59.112147 containerd[1513]: time="2025-07-09T10:12:59.112102058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ffb5d8977-4vhmw,Uid:a3c122d0-f6de-49bd-9842-5042fcddecbc,Namespace:calico-system,Attempt:0,}" Jul 9 10:12:59.112481 containerd[1513]: time="2025-07-09T10:12:59.112158183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ftcwk,Uid:8e450753-85f6-41de-ba56-37afb800c8c3,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:59.112481 containerd[1513]: time="2025-07-09T10:12:59.112103978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp5rv,Uid:80cd3875-f2cb-436b-b9bd-83f74aa24913,Namespace:calico-system,Attempt:0,}" Jul 9 10:12:59.253759 systemd-networkd[1439]: calidb618f1cb94: Link UP Jul 9 10:12:59.256370 systemd-networkd[1439]: calidb618f1cb94: Gained carrier Jul 9 10:12:59.272327 containerd[1513]: 2025-07-09 10:12:59.163 [INFO][4215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0 coredns-674b8bbfcf- kube-system 8e450753-85f6-41de-ba56-37afb800c8c3 800 
0 2025-07-09 10:12:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ftcwk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidb618f1cb94 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Namespace="kube-system" Pod="coredns-674b8bbfcf-ftcwk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ftcwk-" Jul 9 10:12:59.272327 containerd[1513]: 2025-07-09 10:12:59.163 [INFO][4215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Namespace="kube-system" Pod="coredns-674b8bbfcf-ftcwk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0" Jul 9 10:12:59.272327 containerd[1513]: 2025-07-09 10:12:59.205 [INFO][4248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" HandleID="k8s-pod-network.dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Workload="localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0" Jul 9 10:12:59.272566 containerd[1513]: 2025-07-09 10:12:59.205 [INFO][4248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" HandleID="k8s-pod-network.dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Workload="localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c730), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ftcwk", "timestamp":"2025-07-09 10:12:59.205304776 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 10:12:59.272566 containerd[1513]: 2025-07-09 10:12:59.205 [INFO][4248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 10:12:59.272566 containerd[1513]: 2025-07-09 10:12:59.205 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 10:12:59.272566 containerd[1513]: 2025-07-09 10:12:59.205 [INFO][4248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 10:12:59.272566 containerd[1513]: 2025-07-09 10:12:59.215 [INFO][4248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" host="localhost" Jul 9 10:12:59.272566 containerd[1513]: 2025-07-09 10:12:59.227 [INFO][4248] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 10:12:59.272566 containerd[1513]: 2025-07-09 10:12:59.232 [INFO][4248] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 10:12:59.272566 containerd[1513]: 2025-07-09 10:12:59.234 [INFO][4248] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 10:12:59.272566 containerd[1513]: 2025-07-09 10:12:59.236 [INFO][4248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 10:12:59.272566 containerd[1513]: 2025-07-09 10:12:59.236 [INFO][4248] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" host="localhost" Jul 9 10:12:59.272823 containerd[1513]: 2025-07-09 10:12:59.237 [INFO][4248] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178 Jul 9 10:12:59.272823 containerd[1513]: 2025-07-09 10:12:59.241 [INFO][4248] ipam/ipam.go 
1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" host="localhost" Jul 9 10:12:59.272823 containerd[1513]: 2025-07-09 10:12:59.246 [INFO][4248] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" host="localhost" Jul 9 10:12:59.272823 containerd[1513]: 2025-07-09 10:12:59.246 [INFO][4248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" host="localhost" Jul 9 10:12:59.272823 containerd[1513]: 2025-07-09 10:12:59.246 [INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 10:12:59.272823 containerd[1513]: 2025-07-09 10:12:59.246 [INFO][4248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" HandleID="k8s-pod-network.dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Workload="localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0" Jul 9 10:12:59.272960 containerd[1513]: 2025-07-09 10:12:59.251 [INFO][4215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Namespace="kube-system" Pod="coredns-674b8bbfcf-ftcwk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8e450753-85f6-41de-ba56-37afb800c8c3", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 27, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ftcwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb618f1cb94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:12:59.273030 containerd[1513]: 2025-07-09 10:12:59.251 [INFO][4215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Namespace="kube-system" Pod="coredns-674b8bbfcf-ftcwk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0" Jul 9 10:12:59.273030 containerd[1513]: 2025-07-09 10:12:59.251 [INFO][4215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb618f1cb94 ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Namespace="kube-system" Pod="coredns-674b8bbfcf-ftcwk" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0" Jul 9 10:12:59.273030 containerd[1513]: 2025-07-09 10:12:59.258 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Namespace="kube-system" Pod="coredns-674b8bbfcf-ftcwk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0" Jul 9 10:12:59.273108 containerd[1513]: 2025-07-09 10:12:59.258 [INFO][4215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Namespace="kube-system" Pod="coredns-674b8bbfcf-ftcwk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8e450753-85f6-41de-ba56-37afb800c8c3", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178", Pod:"coredns-674b8bbfcf-ftcwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calidb618f1cb94", MAC:"ce:cd:eb:04:4d:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:12:59.273108 containerd[1513]: 2025-07-09 10:12:59.268 [INFO][4215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" Namespace="kube-system" Pod="coredns-674b8bbfcf-ftcwk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ftcwk-eth0" Jul 9 10:12:59.341416 containerd[1513]: time="2025-07-09T10:12:59.341370769Z" level=info msg="connecting to shim dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178" address="unix:///run/containerd/s/97b6c7864bac423789af276d2e5e2cf1d662b33cb96117776003a49d39c9db56" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:59.371869 systemd[1]: Started cri-containerd-dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178.scope - libcontainer container dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178. 
Jul 9 10:12:59.375211 systemd-networkd[1439]: cali3f13a59a042: Link UP Jul 9 10:12:59.376174 systemd-networkd[1439]: cali3f13a59a042: Gained carrier Jul 9 10:12:59.394028 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.166 [INFO][4208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0 calico-kube-controllers-5ffb5d8977- calico-system a3c122d0-f6de-49bd-9842-5042fcddecbc 803 0 2025-07-09 10:12:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5ffb5d8977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5ffb5d8977-4vhmw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3f13a59a042 [] [] }} ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Namespace="calico-system" Pod="calico-kube-controllers-5ffb5d8977-4vhmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.166 [INFO][4208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Namespace="calico-system" Pod="calico-kube-controllers-5ffb5d8977-4vhmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.210 [INFO][4255] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" 
HandleID="k8s-pod-network.6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Workload="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.211 [INFO][4255] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" HandleID="k8s-pod-network.6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Workload="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005aab00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5ffb5d8977-4vhmw", "timestamp":"2025-07-09 10:12:59.210847904 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.211 [INFO][4255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.246 [INFO][4255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.246 [INFO][4255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.317 [INFO][4255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" host="localhost" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.328 [INFO][4255] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.337 [INFO][4255] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.342 [INFO][4255] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.346 [INFO][4255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.346 [INFO][4255] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" host="localhost" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.349 [INFO][4255] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68 Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.357 [INFO][4255] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" host="localhost" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.365 [INFO][4255] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" host="localhost" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.365 [INFO][4255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" host="localhost" Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.366 [INFO][4255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 10:12:59.395429 containerd[1513]: 2025-07-09 10:12:59.366 [INFO][4255] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" HandleID="k8s-pod-network.6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Workload="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0" Jul 9 10:12:59.396068 containerd[1513]: 2025-07-09 10:12:59.369 [INFO][4208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Namespace="calico-system" Pod="calico-kube-controllers-5ffb5d8977-4vhmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0", GenerateName:"calico-kube-controllers-5ffb5d8977-", Namespace:"calico-system", SelfLink:"", UID:"a3c122d0-f6de-49bd-9842-5042fcddecbc", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ffb5d8977", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5ffb5d8977-4vhmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f13a59a042", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:12:59.396068 containerd[1513]: 2025-07-09 10:12:59.370 [INFO][4208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Namespace="calico-system" Pod="calico-kube-controllers-5ffb5d8977-4vhmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0" Jul 9 10:12:59.396068 containerd[1513]: 2025-07-09 10:12:59.370 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f13a59a042 ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Namespace="calico-system" Pod="calico-kube-controllers-5ffb5d8977-4vhmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0" Jul 9 10:12:59.396068 containerd[1513]: 2025-07-09 10:12:59.377 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Namespace="calico-system" Pod="calico-kube-controllers-5ffb5d8977-4vhmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0" Jul 9 10:12:59.396068 containerd[1513]: 2025-07-09 
10:12:59.379 [INFO][4208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Namespace="calico-system" Pod="calico-kube-controllers-5ffb5d8977-4vhmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0", GenerateName:"calico-kube-controllers-5ffb5d8977-", Namespace:"calico-system", SelfLink:"", UID:"a3c122d0-f6de-49bd-9842-5042fcddecbc", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ffb5d8977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68", Pod:"calico-kube-controllers-5ffb5d8977-4vhmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f13a59a042", MAC:"42:52:94:59:08:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:12:59.396068 containerd[1513]: 2025-07-09 
10:12:59.390 [INFO][4208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" Namespace="calico-system" Pod="calico-kube-controllers-5ffb5d8977-4vhmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ffb5d8977--4vhmw-eth0" Jul 9 10:12:59.421317 containerd[1513]: time="2025-07-09T10:12:59.420906086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ftcwk,Uid:8e450753-85f6-41de-ba56-37afb800c8c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178\"" Jul 9 10:12:59.426979 containerd[1513]: time="2025-07-09T10:12:59.426904501Z" level=info msg="connecting to shim 6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68" address="unix:///run/containerd/s/1d667215c025c0b0dbbb0b225aa981b0b00bed62eddbf0c5e1d24ab07c61cf6b" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:59.427814 containerd[1513]: time="2025-07-09T10:12:59.427113042Z" level=info msg="CreateContainer within sandbox \"dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 10:12:59.444898 containerd[1513]: time="2025-07-09T10:12:59.444796096Z" level=info msg="Container 52de188d0091dad425d8c1a2c9c91c0f1d7ca5ca4aced7ef32363136342f80a7: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:59.452083 containerd[1513]: time="2025-07-09T10:12:59.452024237Z" level=info msg="CreateContainer within sandbox \"dffefb1e8c4d35a1bf39fe825453b86e442def22c1c52e7c19c522227ccfa178\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52de188d0091dad425d8c1a2c9c91c0f1d7ca5ca4aced7ef32363136342f80a7\"" Jul 9 10:12:59.452853 containerd[1513]: time="2025-07-09T10:12:59.452630019Z" level=info msg="StartContainer for \"52de188d0091dad425d8c1a2c9c91c0f1d7ca5ca4aced7ef32363136342f80a7\"" Jul 9 10:12:59.454113 containerd[1513]: 
time="2025-07-09T10:12:59.454082408Z" level=info msg="connecting to shim 52de188d0091dad425d8c1a2c9c91c0f1d7ca5ca4aced7ef32363136342f80a7" address="unix:///run/containerd/s/97b6c7864bac423789af276d2e5e2cf1d662b33cb96117776003a49d39c9db56" protocol=ttrpc version=3 Jul 9 10:12:59.468572 systemd-networkd[1439]: cali44466b0471a: Link UP Jul 9 10:12:59.469170 systemd-networkd[1439]: cali44466b0471a: Gained carrier Jul 9 10:12:59.476943 systemd[1]: Started cri-containerd-52de188d0091dad425d8c1a2c9c91c0f1d7ca5ca4aced7ef32363136342f80a7.scope - libcontainer container 52de188d0091dad425d8c1a2c9c91c0f1d7ca5ca4aced7ef32363136342f80a7. Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.196 [INFO][4234] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gp5rv-eth0 csi-node-driver- calico-system 80cd3875-f2cb-436b-b9bd-83f74aa24913 699 0 2025-07-09 10:12:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gp5rv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali44466b0471a [] [] }} ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" Namespace="calico-system" Pod="csi-node-driver-gp5rv" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp5rv-" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.196 [INFO][4234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" Namespace="calico-system" Pod="csi-node-driver-gp5rv" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp5rv-eth0" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.238 [INFO][4266] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" HandleID="k8s-pod-network.a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" Workload="localhost-k8s-csi--node--driver--gp5rv-eth0" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.238 [INFO][4266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" HandleID="k8s-pod-network.a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" Workload="localhost-k8s-csi--node--driver--gp5rv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gp5rv", "timestamp":"2025-07-09 10:12:59.238150944 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.238 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.366 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.366 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.420 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" host="localhost" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.428 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.438 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.443 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.447 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.447 [INFO][4266] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" host="localhost" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.449 [INFO][4266] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.454 [INFO][4266] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" host="localhost" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.462 [INFO][4266] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" host="localhost" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.462 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" host="localhost" Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.463 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 10:12:59.494491 containerd[1513]: 2025-07-09 10:12:59.463 [INFO][4266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" HandleID="k8s-pod-network.a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" Workload="localhost-k8s-csi--node--driver--gp5rv-eth0" Jul 9 10:12:59.495161 containerd[1513]: 2025-07-09 10:12:59.465 [INFO][4234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" Namespace="calico-system" Pod="csi-node-driver-gp5rv" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp5rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gp5rv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80cd3875-f2cb-436b-b9bd-83f74aa24913", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gp5rv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44466b0471a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:12:59.495161 containerd[1513]: 2025-07-09 10:12:59.465 [INFO][4234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" Namespace="calico-system" Pod="csi-node-driver-gp5rv" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp5rv-eth0" Jul 9 10:12:59.495161 containerd[1513]: 2025-07-09 10:12:59.465 [INFO][4234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44466b0471a ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" Namespace="calico-system" Pod="csi-node-driver-gp5rv" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp5rv-eth0" Jul 9 10:12:59.495161 containerd[1513]: 2025-07-09 10:12:59.469 [INFO][4234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" Namespace="calico-system" Pod="csi-node-driver-gp5rv" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp5rv-eth0" Jul 9 10:12:59.495161 containerd[1513]: 2025-07-09 10:12:59.469 [INFO][4234] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" 
Namespace="calico-system" Pod="csi-node-driver-gp5rv" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp5rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gp5rv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80cd3875-f2cb-436b-b9bd-83f74aa24913", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a", Pod:"csi-node-driver-gp5rv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44466b0471a", MAC:"0a:37:db:af:3a:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:12:59.495161 containerd[1513]: 2025-07-09 10:12:59.487 [INFO][4234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" Namespace="calico-system" Pod="csi-node-driver-gp5rv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gp5rv-eth0" Jul 9 10:12:59.494923 systemd[1]: Started cri-containerd-6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68.scope - libcontainer container 6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68. Jul 9 10:12:59.522942 containerd[1513]: time="2025-07-09T10:12:59.522571192Z" level=info msg="connecting to shim a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a" address="unix:///run/containerd/s/ece4e9fd0d85b7d5704b9aa3088a4e8e4dff2a1688cf1e98beb8d57e1797192d" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:59.530562 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:12:59.559391 containerd[1513]: time="2025-07-09T10:12:59.559285077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ffb5d8977-4vhmw,Uid:a3c122d0-f6de-49bd-9842-5042fcddecbc,Namespace:calico-system,Attempt:0,} returns sandbox id \"6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68\"" Jul 9 10:12:59.560689 containerd[1513]: time="2025-07-09T10:12:59.560662778Z" level=info msg="StartContainer for \"52de188d0091dad425d8c1a2c9c91c0f1d7ca5ca4aced7ef32363136342f80a7\" returns successfully" Jul 9 10:12:59.564594 containerd[1513]: time="2025-07-09T10:12:59.564571859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 9 10:12:59.564879 systemd[1]: Started cri-containerd-a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a.scope - libcontainer container a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a. 
Jul 9 10:12:59.579310 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:12:59.601104 containerd[1513]: time="2025-07-09T10:12:59.599340184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp5rv,Uid:80cd3875-f2cb-436b-b9bd-83f74aa24913,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a\"" Jul 9 10:13:00.214058 systemd[1]: Started sshd@7-10.0.0.141:22-10.0.0.1:57390.service - OpenSSH per-connection server daemon (10.0.0.1:57390). Jul 9 10:13:00.282865 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 57390 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:00.284597 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:00.294758 kubelet[2658]: I0709 10:13:00.292010 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ftcwk" podStartSLOduration=33.291992443 podStartE2EDuration="33.291992443s" podCreationTimestamp="2025-07-09 10:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:13:00.29146515 +0000 UTC m=+38.263911014" watchObservedRunningTime="2025-07-09 10:13:00.291992443 +0000 UTC m=+38.264438307" Jul 9 10:13:00.295022 systemd-logind[1485]: New session 8 of user core. Jul 9 10:13:00.298998 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 10:13:00.585401 sshd[4492]: Connection closed by 10.0.0.1 port 57390 Jul 9 10:13:00.585951 sshd-session[4488]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:00.590106 systemd[1]: sshd@7-10.0.0.141:22-10.0.0.1:57390.service: Deactivated successfully. Jul 9 10:13:00.592351 systemd[1]: session-8.scope: Deactivated successfully. 
Jul 9 10:13:00.593251 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Jul 9 10:13:00.594801 systemd-logind[1485]: Removed session 8. Jul 9 10:13:00.887852 systemd-networkd[1439]: cali44466b0471a: Gained IPv6LL Jul 9 10:13:01.112277 containerd[1513]: time="2025-07-09T10:13:01.112221543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2c74h,Uid:74848606-f18c-4818-9f61-312c9532eeeb,Namespace:kube-system,Attempt:0,}" Jul 9 10:13:01.143855 systemd-networkd[1439]: cali3f13a59a042: Gained IPv6LL Jul 9 10:13:01.238027 systemd-networkd[1439]: calie0dd9fe3b12: Link UP Jul 9 10:13:01.238506 systemd-networkd[1439]: calie0dd9fe3b12: Gained carrier Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.160 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2c74h-eth0 coredns-674b8bbfcf- kube-system 74848606-f18c-4818-9f61-312c9532eeeb 801 0 2025-07-09 10:12:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2c74h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0dd9fe3b12 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Namespace="kube-system" Pod="coredns-674b8bbfcf-2c74h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2c74h-" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.160 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Namespace="kube-system" Pod="coredns-674b8bbfcf-2c74h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2c74h-eth0" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.189 
[INFO][4529] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" HandleID="k8s-pod-network.1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Workload="localhost-k8s-coredns--674b8bbfcf--2c74h-eth0" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.190 [INFO][4529] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" HandleID="k8s-pod-network.1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Workload="localhost-k8s-coredns--674b8bbfcf--2c74h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000503600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2c74h", "timestamp":"2025-07-09 10:13:01.189975067 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.190 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.190 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.190 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.202 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" host="localhost" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.208 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.213 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.215 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.218 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.218 [INFO][4529] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" host="localhost" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.219 [INFO][4529] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076 Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.224 [INFO][4529] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" host="localhost" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.230 [INFO][4529] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" host="localhost" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.230 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" host="localhost" Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.230 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 10:13:01.255988 containerd[1513]: 2025-07-09 10:13:01.230 [INFO][4529] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" HandleID="k8s-pod-network.1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Workload="localhost-k8s-coredns--674b8bbfcf--2c74h-eth0" Jul 9 10:13:01.256825 containerd[1513]: 2025-07-09 10:13:01.235 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Namespace="kube-system" Pod="coredns-674b8bbfcf-2c74h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2c74h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2c74h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"74848606-f18c-4818-9f61-312c9532eeeb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2c74h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0dd9fe3b12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:13:01.256825 containerd[1513]: 2025-07-09 10:13:01.235 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Namespace="kube-system" Pod="coredns-674b8bbfcf-2c74h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2c74h-eth0" Jul 9 10:13:01.256825 containerd[1513]: 2025-07-09 10:13:01.235 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0dd9fe3b12 ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Namespace="kube-system" Pod="coredns-674b8bbfcf-2c74h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2c74h-eth0" Jul 9 10:13:01.256825 containerd[1513]: 2025-07-09 10:13:01.238 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Namespace="kube-system" Pod="coredns-674b8bbfcf-2c74h" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2c74h-eth0" Jul 9 10:13:01.256825 containerd[1513]: 2025-07-09 10:13:01.239 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Namespace="kube-system" Pod="coredns-674b8bbfcf-2c74h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2c74h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2c74h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"74848606-f18c-4818-9f61-312c9532eeeb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076", Pod:"coredns-674b8bbfcf-2c74h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0dd9fe3b12", MAC:"d6:54:77:99:ed:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:13:01.256825 containerd[1513]: 2025-07-09 10:13:01.251 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" Namespace="kube-system" Pod="coredns-674b8bbfcf-2c74h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2c74h-eth0" Jul 9 10:13:01.271956 systemd-networkd[1439]: calidb618f1cb94: Gained IPv6LL Jul 9 10:13:01.286729 containerd[1513]: time="2025-07-09T10:13:01.286402307Z" level=info msg="connecting to shim 1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076" address="unix:///run/containerd/s/3bf9dd4c0a3212ad29f85d30a6b6dd32a7df104c79da9e0531ad6cf2bb4c4794" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:13:01.322904 systemd[1]: Started cri-containerd-1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076.scope - libcontainer container 1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076. 
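The IPAM sequence above (try the host's affine block 192.168.88.128/26, load it, claim the next address) can be sketched as a minimal first-free-address allocator. This is an illustrative sketch of the allocation pattern visible in the log, not Calico's actual allocator; the set of previously used addresses is an assumption made for the example.

```python
import ipaddress

def claim_next_ip(block_cidr, allocated):
    """Return the first free host address in the block, mimicking the
    'try affinity block, then claim IPs' pattern in the IPAM log above.
    Illustrative only -- not Calico's real implementation."""
    block = ipaddress.ip_network(block_cidr)
    for ip in block.hosts():          # .129 .. .190 for a /26 at .128
        if str(ip) not in allocated:
            allocated.add(str(ip))
            return str(ip)
    raise RuntimeError(f"block {block_cidr} exhausted")

# The log shows 192.168.88.133 claimed from 192.168.88.128/26; assuming
# .129-.132 were already held by earlier workloads on this host:
used = {"192.168.88.129", "192.168.88.130",
        "192.168.88.131", "192.168.88.132"}
print(claim_next_ip("192.168.88.128/26", used))  # 192.168.88.133
```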
Jul 9 10:13:01.341565 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:13:01.366580 containerd[1513]: time="2025-07-09T10:13:01.366520778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2c74h,Uid:74848606-f18c-4818-9f61-312c9532eeeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076\"" Jul 9 10:13:01.372416 containerd[1513]: time="2025-07-09T10:13:01.372380782Z" level=info msg="CreateContainer within sandbox \"1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 10:13:01.379745 containerd[1513]: time="2025-07-09T10:13:01.379669803Z" level=info msg="Container 7e0c9014d0108bf60936bcbc489fb2f0367c6af22779a40b4e74dee00d6076fe: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:13:01.384413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984048100.mount: Deactivated successfully. 
Jul 9 10:13:01.386542 containerd[1513]: time="2025-07-09T10:13:01.386504341Z" level=info msg="CreateContainer within sandbox \"1f465a249911f132b608aba8ef800fc69c12ba7ff276c40cadc53ecbdb27c076\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e0c9014d0108bf60936bcbc489fb2f0367c6af22779a40b4e74dee00d6076fe\"" Jul 9 10:13:01.387680 containerd[1513]: time="2025-07-09T10:13:01.387648171Z" level=info msg="StartContainer for \"7e0c9014d0108bf60936bcbc489fb2f0367c6af22779a40b4e74dee00d6076fe\"" Jul 9 10:13:01.388799 containerd[1513]: time="2025-07-09T10:13:01.388450168Z" level=info msg="connecting to shim 7e0c9014d0108bf60936bcbc489fb2f0367c6af22779a40b4e74dee00d6076fe" address="unix:///run/containerd/s/3bf9dd4c0a3212ad29f85d30a6b6dd32a7df104c79da9e0531ad6cf2bb4c4794" protocol=ttrpc version=3 Jul 9 10:13:01.407892 systemd[1]: Started cri-containerd-7e0c9014d0108bf60936bcbc489fb2f0367c6af22779a40b4e74dee00d6076fe.scope - libcontainer container 7e0c9014d0108bf60936bcbc489fb2f0367c6af22779a40b4e74dee00d6076fe. 
Jul 9 10:13:01.442552 containerd[1513]: time="2025-07-09T10:13:01.442510011Z" level=info msg="StartContainer for \"7e0c9014d0108bf60936bcbc489fb2f0367c6af22779a40b4e74dee00d6076fe\" returns successfully" Jul 9 10:13:01.580527 containerd[1513]: time="2025-07-09T10:13:01.580158419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:13:01.580752 containerd[1513]: time="2025-07-09T10:13:01.580728914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 9 10:13:01.581561 containerd[1513]: time="2025-07-09T10:13:01.581511109Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:13:01.583852 containerd[1513]: time="2025-07-09T10:13:01.583413652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:13:01.584247 containerd[1513]: time="2025-07-09T10:13:01.584214849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.01944293s" Jul 9 10:13:01.584305 containerd[1513]: time="2025-07-09T10:13:01.584250733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 9 10:13:01.586120 containerd[1513]: time="2025-07-09T10:13:01.585889010Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 9 10:13:01.598144 containerd[1513]: time="2025-07-09T10:13:01.598106786Z" level=info msg="CreateContainer within sandbox \"6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 9 10:13:01.603768 containerd[1513]: time="2025-07-09T10:13:01.603722647Z" level=info msg="Container 2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:13:01.614450 containerd[1513]: time="2025-07-09T10:13:01.614324067Z" level=info msg="CreateContainer within sandbox \"6acafc2293d318582ea29db22498e484347ed02c78b3cc7f147f2d685ea32b68\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047\"" Jul 9 10:13:01.615160 containerd[1513]: time="2025-07-09T10:13:01.615117103Z" level=info msg="StartContainer for \"2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047\"" Jul 9 10:13:01.616468 containerd[1513]: time="2025-07-09T10:13:01.616436830Z" level=info msg="connecting to shim 2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047" address="unix:///run/containerd/s/1d667215c025c0b0dbbb0b225aa981b0b00bed62eddbf0c5e1d24ab07c61cf6b" protocol=ttrpc version=3 Jul 9 10:13:01.645937 systemd[1]: Started cri-containerd-2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047.scope - libcontainer container 2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047. 
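The pull record above reports the kube-controllers image at 49497545 bytes fetched in 2.01944293 s; a quick back-of-the-envelope conversion gives the effective pull rate (figures taken directly from the log entry, nothing else assumed):

```python
# Effective pull rate for the kube-controllers image reported above.
size_bytes = 49497545      # repo-digest size from the PullImage log line
elapsed_s = 2.01944293     # duration from the same line
rate_mib_s = size_bytes / elapsed_s / (1024 ** 2)
print(f"{rate_mib_s:.1f} MiB/s")  # 23.4 MiB/s
```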
Jul 9 10:13:01.682895 containerd[1513]: time="2025-07-09T10:13:01.682792016Z" level=info msg="StartContainer for \"2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047\" returns successfully" Jul 9 10:13:02.113864 containerd[1513]: time="2025-07-09T10:13:02.113817399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb5d897-zj5gr,Uid:6529ece6-b02f-4f4a-9995-1fcc8f528fce,Namespace:calico-apiserver,Attempt:0,}" Jul 9 10:13:02.114219 containerd[1513]: time="2025-07-09T10:13:02.113952092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb5d897-tkhzm,Uid:26ced58c-aa95-4329-9c3b-e15631430951,Namespace:calico-apiserver,Attempt:0,}" Jul 9 10:13:02.234763 systemd-networkd[1439]: calieb8eb13ab4b: Link UP Jul 9 10:13:02.235742 systemd-networkd[1439]: calieb8eb13ab4b: Gained carrier Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.156 [INFO][4677] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0 calico-apiserver-57ccb5d897- calico-apiserver 6529ece6-b02f-4f4a-9995-1fcc8f528fce 795 0 2025-07-09 10:12:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57ccb5d897 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57ccb5d897-zj5gr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb8eb13ab4b [] [] }} ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-zj5gr" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.157 [INFO][4677] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-zj5gr" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.187 [INFO][4704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" HandleID="k8s-pod-network.aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Workload="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.187 [INFO][4704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" HandleID="k8s-pod-network.aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Workload="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c4b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57ccb5d897-zj5gr", "timestamp":"2025-07-09 10:13:02.187782816 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.188 [INFO][4704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.188 [INFO][4704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.188 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.200 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" host="localhost" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.206 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.212 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.214 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.216 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.216 [INFO][4704] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" host="localhost" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.217 [INFO][4704] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344 Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.221 [INFO][4704] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" host="localhost" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.226 [INFO][4704] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" host="localhost" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.226 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" host="localhost" Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.226 [INFO][4704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 10:13:02.249244 containerd[1513]: 2025-07-09 10:13:02.227 [INFO][4704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" HandleID="k8s-pod-network.aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Workload="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0" Jul 9 10:13:02.250475 containerd[1513]: 2025-07-09 10:13:02.230 [INFO][4677] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-zj5gr" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0", GenerateName:"calico-apiserver-57ccb5d897-", Namespace:"calico-apiserver", SelfLink:"", UID:"6529ece6-b02f-4f4a-9995-1fcc8f528fce", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57ccb5d897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57ccb5d897-zj5gr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb8eb13ab4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:13:02.250475 containerd[1513]: 2025-07-09 10:13:02.230 [INFO][4677] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-zj5gr" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0" Jul 9 10:13:02.250475 containerd[1513]: 2025-07-09 10:13:02.230 [INFO][4677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb8eb13ab4b ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-zj5gr" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0" Jul 9 10:13:02.250475 containerd[1513]: 2025-07-09 10:13:02.235 [INFO][4677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-zj5gr" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0" Jul 9 10:13:02.250475 containerd[1513]: 2025-07-09 10:13:02.236 [INFO][4677] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-zj5gr" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0", GenerateName:"calico-apiserver-57ccb5d897-", Namespace:"calico-apiserver", SelfLink:"", UID:"6529ece6-b02f-4f4a-9995-1fcc8f528fce", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57ccb5d897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344", Pod:"calico-apiserver-57ccb5d897-zj5gr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb8eb13ab4b", MAC:"02:4e:31:a1:e7:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:13:02.250475 containerd[1513]: 2025-07-09 10:13:02.245 [INFO][4677] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-zj5gr" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--zj5gr-eth0" Jul 9 10:13:02.305076 kubelet[2658]: I0709 10:13:02.305005 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2c74h" podStartSLOduration=35.304984143 podStartE2EDuration="35.304984143s" podCreationTimestamp="2025-07-09 10:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:13:02.303584132 +0000 UTC m=+40.276029996" watchObservedRunningTime="2025-07-09 10:13:02.304984143 +0000 UTC m=+40.277430007" Jul 9 10:13:02.331224 kubelet[2658]: I0709 10:13:02.331158 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5ffb5d8977-4vhmw" podStartSLOduration=20.309992206 podStartE2EDuration="22.331140782s" podCreationTimestamp="2025-07-09 10:12:40 +0000 UTC" firstStartedPulling="2025-07-09 10:12:59.564348436 +0000 UTC m=+37.536794300" lastFinishedPulling="2025-07-09 10:13:01.585497052 +0000 UTC m=+39.557942876" observedRunningTime="2025-07-09 10:13:02.330152369 +0000 UTC m=+40.302598233" watchObservedRunningTime="2025-07-09 10:13:02.331140782 +0000 UTC m=+40.303586606" Jul 9 10:13:02.347738 containerd[1513]: time="2025-07-09T10:13:02.346916812Z" level=info msg="connecting to shim aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344" address="unix:///run/containerd/s/23c7e2da750c69fa10c0967fb30b72351b6fd0b5bf90c143fe0205ffbe21a580" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:13:02.408770 systemd-networkd[1439]: calic1cc63ed48d: Link UP Jul 9 10:13:02.411590 systemd-networkd[1439]: calic1cc63ed48d: Gained carrier Jul 9 10:13:02.426019 systemd[1]: Started 
cri-containerd-aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344.scope - libcontainer container aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344. Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.159 [INFO][4689] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0 calico-apiserver-57ccb5d897- calico-apiserver 26ced58c-aa95-4329-9c3b-e15631430951 802 0 2025-07-09 10:12:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57ccb5d897 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57ccb5d897-tkhzm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic1cc63ed48d [] [] }} ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-tkhzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.159 [INFO][4689] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-tkhzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.190 [INFO][4707] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" HandleID="k8s-pod-network.60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Workload="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.191 [INFO][4707] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" HandleID="k8s-pod-network.60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Workload="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cd30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57ccb5d897-tkhzm", "timestamp":"2025-07-09 10:13:02.190634001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.191 [INFO][4707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.226 [INFO][4707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
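Note the gap between request [4707]'s "About to acquire host-wide IPAM lock" (10:13:02.191) and "Acquired" (10:13:02.226): unlike the earlier requests, it briefly waits while the sibling request [4704] holds the lock. A small sketch for measuring such gaps from the in-entry timestamps (same-day timestamps assumed; the helper name is made up for this example):

```python
from datetime import datetime

def delta_ms(t1, t2, fmt="%H:%M:%S.%f"):
    """Millisecond gap between two in-entry timestamps such as
    '10:13:02.191' and '10:13:02.226'. Assumes both fall on the
    same day, as in this log excerpt."""
    a = datetime.strptime(t1, fmt)
    b = datetime.strptime(t2, fmt)
    return (b - a).total_seconds() * 1000.0

# Lock wait observed for request [4707] above: about 35 ms.
print(delta_ms("10:13:02.191", "10:13:02.226"))
```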
Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.227 [INFO][4707] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.303 [INFO][4707] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" host="localhost" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.320 [INFO][4707] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.337 [INFO][4707] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.341 [INFO][4707] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.349 [INFO][4707] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.349 [INFO][4707] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" host="localhost" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.352 [INFO][4707] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329 Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.374 [INFO][4707] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" host="localhost" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.392 [INFO][4707] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" host="localhost" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.392 [INFO][4707] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" host="localhost" Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.392 [INFO][4707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 10:13:02.434067 containerd[1513]: 2025-07-09 10:13:02.392 [INFO][4707] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" HandleID="k8s-pod-network.60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Workload="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0" Jul 9 10:13:02.434842 containerd[1513]: 2025-07-09 10:13:02.401 [INFO][4689] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-tkhzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0", GenerateName:"calico-apiserver-57ccb5d897-", Namespace:"calico-apiserver", SelfLink:"", UID:"26ced58c-aa95-4329-9c3b-e15631430951", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57ccb5d897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57ccb5d897-tkhzm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1cc63ed48d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:13:02.434842 containerd[1513]: 2025-07-09 10:13:02.401 [INFO][4689] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-tkhzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0" Jul 9 10:13:02.434842 containerd[1513]: 2025-07-09 10:13:02.402 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1cc63ed48d ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-tkhzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0" Jul 9 10:13:02.434842 containerd[1513]: 2025-07-09 10:13:02.411 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-tkhzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0" Jul 9 10:13:02.434842 containerd[1513]: 2025-07-09 10:13:02.411 [INFO][4689] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-tkhzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0", GenerateName:"calico-apiserver-57ccb5d897-", Namespace:"calico-apiserver", SelfLink:"", UID:"26ced58c-aa95-4329-9c3b-e15631430951", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57ccb5d897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329", Pod:"calico-apiserver-57ccb5d897-tkhzm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1cc63ed48d", MAC:"de:66:76:80:1d:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:13:02.434842 containerd[1513]: 2025-07-09 10:13:02.431 [INFO][4689] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb5d897-tkhzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb5d897--tkhzm-eth0" Jul 9 10:13:02.455008 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:13:02.463090 containerd[1513]: time="2025-07-09T10:13:02.463009356Z" level=info msg="connecting to shim 60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329" address="unix:///run/containerd/s/24af0e7ccb24b0c5bcd015a950c8d2f7a83c42e1b024ee1464ef1ffe9dd7c082" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:13:02.498873 systemd[1]: Started cri-containerd-60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329.scope - libcontainer container 60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329. Jul 9 10:13:02.501290 containerd[1513]: time="2025-07-09T10:13:02.501249642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb5d897-zj5gr,Uid:6529ece6-b02f-4f4a-9995-1fcc8f528fce,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344\"" Jul 9 10:13:02.512826 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:13:02.533327 containerd[1513]: time="2025-07-09T10:13:02.533290389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb5d897-tkhzm,Uid:26ced58c-aa95-4329-9c3b-e15631430951,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329\"" Jul 9 10:13:02.551873 systemd-networkd[1439]: calie0dd9fe3b12: Gained IPv6LL Jul 9 10:13:02.814414 containerd[1513]: time="2025-07-09T10:13:02.814354314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:13:02.815711 containerd[1513]: time="2025-07-09T10:13:02.815669437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 9 10:13:02.816906 containerd[1513]: time="2025-07-09T10:13:02.816864748Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:13:02.819398 containerd[1513]: time="2025-07-09T10:13:02.819343099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:13:02.837808 containerd[1513]: time="2025-07-09T10:13:02.837748655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.251817841s" Jul 9 10:13:02.837808 containerd[1513]: time="2025-07-09T10:13:02.837795179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 9 10:13:02.838716 containerd[1513]: time="2025-07-09T10:13:02.838637418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 10:13:02.842594 containerd[1513]: time="2025-07-09T10:13:02.842555463Z" level=info msg="CreateContainer within sandbox \"a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 9 10:13:02.849930 containerd[1513]: time="2025-07-09T10:13:02.849844023Z" level=info msg="Container 
744c7e7f1140ad7eaed8759b5b2b5954ffcaed98b98f7979d18a749797eebd3f: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:13:02.857018 containerd[1513]: time="2025-07-09T10:13:02.856977048Z" level=info msg="CreateContainer within sandbox \"a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"744c7e7f1140ad7eaed8759b5b2b5954ffcaed98b98f7979d18a749797eebd3f\"" Jul 9 10:13:02.858049 containerd[1513]: time="2025-07-09T10:13:02.857665992Z" level=info msg="StartContainer for \"744c7e7f1140ad7eaed8759b5b2b5954ffcaed98b98f7979d18a749797eebd3f\"" Jul 9 10:13:02.859520 containerd[1513]: time="2025-07-09T10:13:02.859492842Z" level=info msg="connecting to shim 744c7e7f1140ad7eaed8759b5b2b5954ffcaed98b98f7979d18a749797eebd3f" address="unix:///run/containerd/s/ece4e9fd0d85b7d5704b9aa3088a4e8e4dff2a1688cf1e98beb8d57e1797192d" protocol=ttrpc version=3 Jul 9 10:13:02.884892 systemd[1]: Started cri-containerd-744c7e7f1140ad7eaed8759b5b2b5954ffcaed98b98f7979d18a749797eebd3f.scope - libcontainer container 744c7e7f1140ad7eaed8759b5b2b5954ffcaed98b98f7979d18a749797eebd3f. 
Jul 9 10:13:02.916473 containerd[1513]: time="2025-07-09T10:13:02.916416910Z" level=info msg="StartContainer for \"744c7e7f1140ad7eaed8759b5b2b5954ffcaed98b98f7979d18a749797eebd3f\" returns successfully" Jul 9 10:13:03.112120 containerd[1513]: time="2025-07-09T10:13:03.112003021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nl7w8,Uid:5e10e32b-6714-4d36-bb91-3840e2971150,Namespace:calico-system,Attempt:0,}" Jul 9 10:13:03.225989 systemd-networkd[1439]: cali46a1d1435da: Link UP Jul 9 10:13:03.227045 systemd-networkd[1439]: cali46a1d1435da: Gained carrier Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.147 [INFO][4877] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0 goldmane-768f4c5c69- calico-system 5e10e32b-6714-4d36-bb91-3840e2971150 804 0 2025-07-09 10:12:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-nl7w8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali46a1d1435da [] [] }} ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Namespace="calico-system" Pod="goldmane-768f4c5c69-nl7w8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nl7w8-" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.147 [INFO][4877] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Namespace="calico-system" Pod="goldmane-768f4c5c69-nl7w8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.179 [INFO][4891] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" HandleID="k8s-pod-network.a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Workload="localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.179 [INFO][4891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" HandleID="k8s-pod-network.a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Workload="localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-nl7w8", "timestamp":"2025-07-09 10:13:03.179330622 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.179 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.179 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.179 [INFO][4891] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.190 [INFO][4891] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" host="localhost" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.195 [INFO][4891] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.200 [INFO][4891] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.202 [INFO][4891] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.204 [INFO][4891] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.204 [INFO][4891] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" host="localhost" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.206 [INFO][4891] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61 Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.210 [INFO][4891] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" host="localhost" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.217 [INFO][4891] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" host="localhost" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.217 [INFO][4891] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" host="localhost" Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.217 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 10:13:03.242463 containerd[1513]: 2025-07-09 10:13:03.217 [INFO][4891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" HandleID="k8s-pod-network.a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Workload="localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0" Jul 9 10:13:03.243193 containerd[1513]: 2025-07-09 10:13:03.220 [INFO][4877] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Namespace="calico-system" Pod="goldmane-768f4c5c69-nl7w8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"5e10e32b-6714-4d36-bb91-3840e2971150", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-nl7w8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali46a1d1435da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:13:03.243193 containerd[1513]: 2025-07-09 10:13:03.221 [INFO][4877] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Namespace="calico-system" Pod="goldmane-768f4c5c69-nl7w8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0" Jul 9 10:13:03.243193 containerd[1513]: 2025-07-09 10:13:03.221 [INFO][4877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46a1d1435da ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Namespace="calico-system" Pod="goldmane-768f4c5c69-nl7w8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0" Jul 9 10:13:03.243193 containerd[1513]: 2025-07-09 10:13:03.229 [INFO][4877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Namespace="calico-system" Pod="goldmane-768f4c5c69-nl7w8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0" Jul 9 10:13:03.243193 containerd[1513]: 2025-07-09 10:13:03.229 [INFO][4877] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Namespace="calico-system" Pod="goldmane-768f4c5c69-nl7w8" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"5e10e32b-6714-4d36-bb91-3840e2971150", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 10, 12, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61", Pod:"goldmane-768f4c5c69-nl7w8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali46a1d1435da", MAC:"b2:82:e1:88:41:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 10:13:03.243193 containerd[1513]: 2025-07-09 10:13:03.239 [INFO][4877] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" Namespace="calico-system" Pod="goldmane-768f4c5c69-nl7w8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nl7w8-eth0" Jul 9 10:13:03.265309 containerd[1513]: time="2025-07-09T10:13:03.265264264Z" level=info msg="connecting to shim 
a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61" address="unix:///run/containerd/s/5b741bd3e4b1e458ad8056c3d7cb92396f9ea4e9bafb7a311d6edc9e331bd8e3" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:13:03.286923 systemd[1]: Started cri-containerd-a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61.scope - libcontainer container a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61. Jul 9 10:13:03.294878 kubelet[2658]: I0709 10:13:03.294850 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 10:13:03.301364 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:13:03.323156 containerd[1513]: time="2025-07-09T10:13:03.323115409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nl7w8,Uid:5e10e32b-6714-4d36-bb91-3840e2971150,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61\"" Jul 9 10:13:03.447854 systemd-networkd[1439]: calic1cc63ed48d: Gained IPv6LL Jul 9 10:13:03.831891 systemd-networkd[1439]: calieb8eb13ab4b: Gained IPv6LL Jul 9 10:13:04.102927 containerd[1513]: time="2025-07-09T10:13:04.102773584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047\" id:\"0b56e7cb07d7f8100f402d47a3f7533b0e740723285b18d75b6cccf60e078217\" pid:4969 exited_at:{seconds:1752055984 nanos:99406089}" Jul 9 10:13:04.146806 containerd[1513]: time="2025-07-09T10:13:04.146765593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047\" id:\"bfe85cf7454a6dcef3066125623b98d5050cd02f04698e181fbe5f4dc6a51884\" pid:4992 exited_at:{seconds:1752055984 nanos:146560255}" Jul 9 10:13:04.821510 containerd[1513]: time="2025-07-09T10:13:04.821466989Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:13:04.822230 containerd[1513]: time="2025-07-09T10:13:04.822203613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 9 10:13:04.823073 containerd[1513]: time="2025-07-09T10:13:04.823047047Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:13:04.825091 containerd[1513]: time="2025-07-09T10:13:04.825060503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:13:04.825632 containerd[1513]: time="2025-07-09T10:13:04.825599990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.986931649s" Jul 9 10:13:04.825662 containerd[1513]: time="2025-07-09T10:13:04.825630833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 9 10:13:04.826544 containerd[1513]: time="2025-07-09T10:13:04.826510390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 10:13:04.830497 containerd[1513]: time="2025-07-09T10:13:04.830467376Z" level=info msg="CreateContainer within sandbox \"aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 10:13:04.835726 
containerd[1513]: time="2025-07-09T10:13:04.835669271Z" level=info msg="Container 55bd3215c0703efd3069eaa4c4f96363054a107aaf76d94939a0090caf3b6d39: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:13:04.841300 containerd[1513]: time="2025-07-09T10:13:04.841248640Z" level=info msg="CreateContainer within sandbox \"aeead9cb3ecccad367657f8d89a53e2ca12e39fbfb178cee0378fd8215f45344\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"55bd3215c0703efd3069eaa4c4f96363054a107aaf76d94939a0090caf3b6d39\"" Jul 9 10:13:04.841955 containerd[1513]: time="2025-07-09T10:13:04.841841051Z" level=info msg="StartContainer for \"55bd3215c0703efd3069eaa4c4f96363054a107aaf76d94939a0090caf3b6d39\"" Jul 9 10:13:04.843656 containerd[1513]: time="2025-07-09T10:13:04.843628888Z" level=info msg="connecting to shim 55bd3215c0703efd3069eaa4c4f96363054a107aaf76d94939a0090caf3b6d39" address="unix:///run/containerd/s/23c7e2da750c69fa10c0967fb30b72351b6fd0b5bf90c143fe0205ffbe21a580" protocol=ttrpc version=3 Jul 9 10:13:04.871885 systemd[1]: Started cri-containerd-55bd3215c0703efd3069eaa4c4f96363054a107aaf76d94939a0090caf3b6d39.scope - libcontainer container 55bd3215c0703efd3069eaa4c4f96363054a107aaf76d94939a0090caf3b6d39. 
Jul 9 10:13:04.909344 containerd[1513]: time="2025-07-09T10:13:04.909234588Z" level=info msg="StartContainer for \"55bd3215c0703efd3069eaa4c4f96363054a107aaf76d94939a0090caf3b6d39\" returns successfully" Jul 9 10:13:05.047868 systemd-networkd[1439]: cali46a1d1435da: Gained IPv6LL Jul 9 10:13:05.054332 containerd[1513]: time="2025-07-09T10:13:05.054284656Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:13:05.054820 containerd[1513]: time="2025-07-09T10:13:05.054780498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 9 10:13:05.056521 containerd[1513]: time="2025-07-09T10:13:05.056344671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 229.801838ms" Jul 9 10:13:05.056521 containerd[1513]: time="2025-07-09T10:13:05.056378554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 9 10:13:05.057316 containerd[1513]: time="2025-07-09T10:13:05.057059251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 9 10:13:05.061289 containerd[1513]: time="2025-07-09T10:13:05.060792768Z" level=info msg="CreateContainer within sandbox \"60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 10:13:05.073005 containerd[1513]: time="2025-07-09T10:13:05.072896314Z" level=info msg="Container 2fcd13e37c1071f7b8cb1b2c91755111f18c7dd675f72bf8c6a0b5c0f6046ddb: CDI 
devices from CRI Config.CDIDevices: []" Jul 9 10:13:05.086495 containerd[1513]: time="2025-07-09T10:13:05.086449703Z" level=info msg="CreateContainer within sandbox \"60c16708a643fa8848c23b432eef4ec87c48e801bc7400bf7d7f56d0dbd79329\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2fcd13e37c1071f7b8cb1b2c91755111f18c7dd675f72bf8c6a0b5c0f6046ddb\"" Jul 9 10:13:05.087329 containerd[1513]: time="2025-07-09T10:13:05.087298095Z" level=info msg="StartContainer for \"2fcd13e37c1071f7b8cb1b2c91755111f18c7dd675f72bf8c6a0b5c0f6046ddb\"" Jul 9 10:13:05.089398 containerd[1513]: time="2025-07-09T10:13:05.089374071Z" level=info msg="connecting to shim 2fcd13e37c1071f7b8cb1b2c91755111f18c7dd675f72bf8c6a0b5c0f6046ddb" address="unix:///run/containerd/s/24af0e7ccb24b0c5bcd015a950c8d2f7a83c42e1b024ee1464ef1ffe9dd7c082" protocol=ttrpc version=3 Jul 9 10:13:05.110875 systemd[1]: Started cri-containerd-2fcd13e37c1071f7b8cb1b2c91755111f18c7dd675f72bf8c6a0b5c0f6046ddb.scope - libcontainer container 2fcd13e37c1071f7b8cb1b2c91755111f18c7dd675f72bf8c6a0b5c0f6046ddb. 
Jul 9 10:13:05.156119 containerd[1513]: time="2025-07-09T10:13:05.156063203Z" level=info msg="StartContainer for \"2fcd13e37c1071f7b8cb1b2c91755111f18c7dd675f72bf8c6a0b5c0f6046ddb\" returns successfully" Jul 9 10:13:05.346728 kubelet[2658]: I0709 10:13:05.345942 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57ccb5d897-tkhzm" podStartSLOduration=26.823435193 podStartE2EDuration="29.345924497s" podCreationTimestamp="2025-07-09 10:12:36 +0000 UTC" firstStartedPulling="2025-07-09 10:13:02.53447934 +0000 UTC m=+40.506925204" lastFinishedPulling="2025-07-09 10:13:05.056968644 +0000 UTC m=+43.029414508" observedRunningTime="2025-07-09 10:13:05.33392048 +0000 UTC m=+43.306366344" watchObservedRunningTime="2025-07-09 10:13:05.345924497 +0000 UTC m=+43.318370361" Jul 9 10:13:05.348131 kubelet[2658]: I0709 10:13:05.347732 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57ccb5d897-zj5gr" podStartSLOduration=27.025050781 podStartE2EDuration="29.347719889s" podCreationTimestamp="2025-07-09 10:12:36 +0000 UTC" firstStartedPulling="2025-07-09 10:13:02.50370255 +0000 UTC m=+40.476148414" lastFinishedPulling="2025-07-09 10:13:04.826371658 +0000 UTC m=+42.798817522" observedRunningTime="2025-07-09 10:13:05.345383851 +0000 UTC m=+43.317829715" watchObservedRunningTime="2025-07-09 10:13:05.347719889 +0000 UTC m=+43.320165713" Jul 9 10:13:05.599978 systemd[1]: Started sshd@8-10.0.0.141:22-10.0.0.1:44194.service - OpenSSH per-connection server daemon (10.0.0.1:44194). Jul 9 10:13:05.680406 sshd[5097]: Accepted publickey for core from 10.0.0.1 port 44194 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:05.683224 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:05.689410 systemd-logind[1485]: New session 9 of user core. 
Jul 9 10:13:05.696021 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 9 10:13:05.897221 sshd[5100]: Connection closed by 10.0.0.1 port 44194
Jul 9 10:13:05.897533 sshd-session[5097]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:05.904893 systemd[1]: sshd@8-10.0.0.141:22-10.0.0.1:44194.service: Deactivated successfully.
Jul 9 10:13:05.909319 systemd[1]: session-9.scope: Deactivated successfully.
Jul 9 10:13:05.911186 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit.
Jul 9 10:13:05.913018 systemd-logind[1485]: Removed session 9.
Jul 9 10:13:07.190381 containerd[1513]: time="2025-07-09T10:13:07.190322565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:13:07.194045 containerd[1513]: time="2025-07-09T10:13:07.194006778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 9 10:13:07.195070 containerd[1513]: time="2025-07-09T10:13:07.195045541Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:13:07.196894 containerd[1513]: time="2025-07-09T10:13:07.196845284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:13:07.197522 containerd[1513]: time="2025-07-09T10:13:07.197486095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.140376359s"
Jul 9 10:13:07.197522 containerd[1513]: time="2025-07-09T10:13:07.197521138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 9 10:13:07.199241 containerd[1513]: time="2025-07-09T10:13:07.199211232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 9 10:13:07.203298 containerd[1513]: time="2025-07-09T10:13:07.202849282Z" level=info msg="CreateContainer within sandbox \"a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 9 10:13:07.218936 containerd[1513]: time="2025-07-09T10:13:07.218885958Z" level=info msg="Container 1b13f8b6168005342ff2c7916a2618fb9bd93a3e551e3afa3cb7ebd876257d9e: CDI devices from CRI Config.CDIDevices: []"
Jul 9 10:13:07.226603 containerd[1513]: time="2025-07-09T10:13:07.226557728Z" level=info msg="CreateContainer within sandbox \"a1ca19a823c1a23bdd6736bd18e7a46b1dd27ce3d438fcfdfd1e0577bb12920a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1b13f8b6168005342ff2c7916a2618fb9bd93a3e551e3afa3cb7ebd876257d9e\""
Jul 9 10:13:07.228451 containerd[1513]: time="2025-07-09T10:13:07.227169937Z" level=info msg="StartContainer for \"1b13f8b6168005342ff2c7916a2618fb9bd93a3e551e3afa3cb7ebd876257d9e\""
Jul 9 10:13:07.228703 containerd[1513]: time="2025-07-09T10:13:07.228671816Z" level=info msg="connecting to shim 1b13f8b6168005342ff2c7916a2618fb9bd93a3e551e3afa3cb7ebd876257d9e" address="unix:///run/containerd/s/ece4e9fd0d85b7d5704b9aa3088a4e8e4dff2a1688cf1e98beb8d57e1797192d" protocol=ttrpc version=3
Jul 9 10:13:07.262959 systemd[1]: Started cri-containerd-1b13f8b6168005342ff2c7916a2618fb9bd93a3e551e3afa3cb7ebd876257d9e.scope - libcontainer container 1b13f8b6168005342ff2c7916a2618fb9bd93a3e551e3afa3cb7ebd876257d9e.
Jul 9 10:13:07.404254 containerd[1513]: time="2025-07-09T10:13:07.404211620Z" level=info msg="StartContainer for \"1b13f8b6168005342ff2c7916a2618fb9bd93a3e551e3afa3cb7ebd876257d9e\" returns successfully"
Jul 9 10:13:08.187753 kubelet[2658]: I0709 10:13:08.187692 2658 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 9 10:13:08.187753 kubelet[2658]: I0709 10:13:08.187754 2658 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 9 10:13:09.170822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3236991652.mount: Deactivated successfully.
Jul 9 10:13:09.812441 containerd[1513]: time="2025-07-09T10:13:09.811957062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:13:09.813910 containerd[1513]: time="2025-07-09T10:13:09.813883565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Jul 9 10:13:09.815022 containerd[1513]: time="2025-07-09T10:13:09.815000049Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:13:09.817611 containerd[1513]: time="2025-07-09T10:13:09.817504956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 10:13:09.818226 containerd[1513]: time="2025-07-09T10:13:09.818200088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.618958573s"
Jul 9 10:13:09.818400 containerd[1513]: time="2025-07-09T10:13:09.818313816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Jul 9 10:13:09.824693 containerd[1513]: time="2025-07-09T10:13:09.824662450Z" level=info msg="CreateContainer within sandbox \"a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 9 10:13:09.842938 containerd[1513]: time="2025-07-09T10:13:09.842881050Z" level=info msg="Container 7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a: CDI devices from CRI Config.CDIDevices: []"
Jul 9 10:13:09.854212 containerd[1513]: time="2025-07-09T10:13:09.854164253Z" level=info msg="CreateContainer within sandbox \"a6baeaa7628a652b92b564b89d6386c4ab5442035e5c03bd69ff7b8e93707f61\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a\""
Jul 9 10:13:09.855097 containerd[1513]: time="2025-07-09T10:13:09.855073240Z" level=info msg="StartContainer for \"7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a\""
Jul 9 10:13:09.856333 containerd[1513]: time="2025-07-09T10:13:09.856300852Z" level=info msg="connecting to shim 7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a" address="unix:///run/containerd/s/5b741bd3e4b1e458ad8056c3d7cb92396f9ea4e9bafb7a311d6edc9e331bd8e3" protocol=ttrpc version=3
Jul 9 10:13:09.876844 systemd[1]: Started cri-containerd-7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a.scope - libcontainer container 7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a.
Jul 9 10:13:09.911005 containerd[1513]: time="2025-07-09T10:13:09.910971054Z" level=info msg="StartContainer for \"7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a\" returns successfully"
Jul 9 10:13:10.430839 kubelet[2658]: I0709 10:13:10.430757 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gp5rv" podStartSLOduration=22.833077331 podStartE2EDuration="30.430737134s" podCreationTimestamp="2025-07-09 10:12:40 +0000 UTC" firstStartedPulling="2025-07-09 10:12:59.600647998 +0000 UTC m=+37.573093822" lastFinishedPulling="2025-07-09 10:13:07.198307761 +0000 UTC m=+45.170753625" observedRunningTime="2025-07-09 10:13:08.428426394 +0000 UTC m=+46.400872258" watchObservedRunningTime="2025-07-09 10:13:10.430737134 +0000 UTC m=+48.403182998"
Jul 9 10:13:10.431454 kubelet[2658]: I0709 10:13:10.431235 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-nl7w8" podStartSLOduration=24.936501815 podStartE2EDuration="31.43122693s" podCreationTimestamp="2025-07-09 10:12:39 +0000 UTC" firstStartedPulling="2025-07-09 10:13:03.324188826 +0000 UTC m=+41.296634690" lastFinishedPulling="2025-07-09 10:13:09.818913941 +0000 UTC m=+47.791359805" observedRunningTime="2025-07-09 10:13:10.430225377 +0000 UTC m=+48.402671201" watchObservedRunningTime="2025-07-09 10:13:10.43122693 +0000 UTC m=+48.403672794"
Jul 9 10:13:10.556997 containerd[1513]: time="2025-07-09T10:13:10.556945222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a\" id:\"a0b0ed8415c9f1d379a86b68619685f39377eac1e52d61362c28a30a9b55d1a5\" pid:5218 exit_status:1 exited_at:{seconds:1752055990 nanos:556554314}"
Jul 9 10:13:10.915039 systemd[1]: Started sshd@9-10.0.0.141:22-10.0.0.1:44202.service - OpenSSH per-connection server daemon (10.0.0.1:44202).
Jul 9 10:13:10.974415 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 44202 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:10.976113 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:10.980253 systemd-logind[1485]: New session 10 of user core.
Jul 9 10:13:10.987926 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 9 10:13:11.217829 sshd[5236]: Connection closed by 10.0.0.1 port 44202
Jul 9 10:13:11.218345 sshd-session[5233]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:11.229052 systemd[1]: sshd@9-10.0.0.141:22-10.0.0.1:44202.service: Deactivated successfully.
Jul 9 10:13:11.230885 systemd[1]: session-10.scope: Deactivated successfully.
Jul 9 10:13:11.232370 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit.
Jul 9 10:13:11.234155 systemd[1]: Started sshd@10-10.0.0.141:22-10.0.0.1:44214.service - OpenSSH per-connection server daemon (10.0.0.1:44214).
Jul 9 10:13:11.235080 systemd-logind[1485]: Removed session 10.
Jul 9 10:13:11.283766 sshd[5250]: Accepted publickey for core from 10.0.0.1 port 44214 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:11.285263 sshd-session[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:11.291009 systemd-logind[1485]: New session 11 of user core.
Jul 9 10:13:11.301085 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 9 10:13:11.520309 sshd[5253]: Connection closed by 10.0.0.1 port 44214
Jul 9 10:13:11.519668 sshd-session[5250]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:11.535990 systemd[1]: sshd@10-10.0.0.141:22-10.0.0.1:44214.service: Deactivated successfully.
Jul 9 10:13:11.537873 containerd[1513]: time="2025-07-09T10:13:11.537835790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a\" id:\"6e4f0de66a6cf5c86a420eaa3ed2526ae7ac8b812ea65465a7fe8751dfc7a4f8\" pid:5272 exit_status:1 exited_at:{seconds:1752055991 nanos:537210986}"
Jul 9 10:13:11.541418 systemd[1]: session-11.scope: Deactivated successfully.
Jul 9 10:13:11.543411 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit.
Jul 9 10:13:11.549620 systemd[1]: Started sshd@11-10.0.0.141:22-10.0.0.1:44230.service - OpenSSH per-connection server daemon (10.0.0.1:44230).
Jul 9 10:13:11.550813 systemd-logind[1485]: Removed session 11.
Jul 9 10:13:11.613650 sshd[5288]: Accepted publickey for core from 10.0.0.1 port 44230 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:11.615141 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:11.619262 systemd-logind[1485]: New session 12 of user core.
Jul 9 10:13:11.628987 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 9 10:13:11.802305 sshd[5291]: Connection closed by 10.0.0.1 port 44230
Jul 9 10:13:11.802643 sshd-session[5288]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:11.808391 systemd[1]: sshd@11-10.0.0.141:22-10.0.0.1:44230.service: Deactivated successfully.
Jul 9 10:13:11.811983 systemd[1]: session-12.scope: Deactivated successfully.
Jul 9 10:13:11.813397 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit.
Jul 9 10:13:11.815226 systemd-logind[1485]: Removed session 12.
Jul 9 10:13:16.817286 systemd[1]: Started sshd@12-10.0.0.141:22-10.0.0.1:56188.service - OpenSSH per-connection server daemon (10.0.0.1:56188).
Jul 9 10:13:16.875503 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 56188 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:16.875993 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:16.879793 systemd-logind[1485]: New session 13 of user core.
Jul 9 10:13:16.889892 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 9 10:13:17.017055 sshd[5322]: Connection closed by 10.0.0.1 port 56188
Jul 9 10:13:17.017403 sshd-session[5319]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:17.020585 systemd[1]: sshd@12-10.0.0.141:22-10.0.0.1:56188.service: Deactivated successfully.
Jul 9 10:13:17.022411 systemd[1]: session-13.scope: Deactivated successfully.
Jul 9 10:13:17.024163 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit.
Jul 9 10:13:17.025118 systemd-logind[1485]: Removed session 13.
Jul 9 10:13:22.032975 systemd[1]: Started sshd@13-10.0.0.141:22-10.0.0.1:56198.service - OpenSSH per-connection server daemon (10.0.0.1:56198).
Jul 9 10:13:22.093234 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 56198 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:22.094401 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:22.098909 systemd-logind[1485]: New session 14 of user core.
Jul 9 10:13:22.112877 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 9 10:13:22.251203 sshd[5340]: Connection closed by 10.0.0.1 port 56198
Jul 9 10:13:22.251533 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:22.254380 systemd[1]: sshd@13-10.0.0.141:22-10.0.0.1:56198.service: Deactivated successfully.
Jul 9 10:13:22.256118 systemd[1]: session-14.scope: Deactivated successfully.
Jul 9 10:13:22.258884 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit.
Jul 9 10:13:22.259963 systemd-logind[1485]: Removed session 14.
Jul 9 10:13:27.096996 containerd[1513]: time="2025-07-09T10:13:27.096937614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75f9dcef2e1db694f9fbcbc8993c25f94c51ba63181b2eb365517e4938f493af\" id:\"74118b6174bad6b54c92329dc7c0b8c370d5fa733f6a5fbef45e0f0877ba8f76\" pid:5367 exited_at:{seconds:1752056007 nanos:96614201}"
Jul 9 10:13:27.266931 systemd[1]: Started sshd@14-10.0.0.141:22-10.0.0.1:48524.service - OpenSSH per-connection server daemon (10.0.0.1:48524).
Jul 9 10:13:27.329992 sshd[5380]: Accepted publickey for core from 10.0.0.1 port 48524 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:27.331367 sshd-session[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:27.334846 systemd-logind[1485]: New session 15 of user core.
Jul 9 10:13:27.349874 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 9 10:13:27.503311 sshd[5383]: Connection closed by 10.0.0.1 port 48524
Jul 9 10:13:27.504007 sshd-session[5380]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:27.507452 systemd[1]: sshd@14-10.0.0.141:22-10.0.0.1:48524.service: Deactivated successfully.
Jul 9 10:13:27.509181 systemd[1]: session-15.scope: Deactivated successfully.
Jul 9 10:13:27.509861 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit.
Jul 9 10:13:27.510899 systemd-logind[1485]: Removed session 15.
Jul 9 10:13:32.514207 systemd[1]: Started sshd@15-10.0.0.141:22-10.0.0.1:53660.service - OpenSSH per-connection server daemon (10.0.0.1:53660).
Jul 9 10:13:32.591755 sshd[5401]: Accepted publickey for core from 10.0.0.1 port 53660 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:32.596045 sshd-session[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:32.602964 systemd-logind[1485]: New session 16 of user core.
Jul 9 10:13:32.608904 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 9 10:13:32.821750 sshd[5404]: Connection closed by 10.0.0.1 port 53660
Jul 9 10:13:32.822286 sshd-session[5401]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:32.833848 systemd[1]: sshd@15-10.0.0.141:22-10.0.0.1:53660.service: Deactivated successfully.
Jul 9 10:13:32.836535 systemd[1]: session-16.scope: Deactivated successfully.
Jul 9 10:13:32.837870 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit.
Jul 9 10:13:32.840118 systemd-logind[1485]: Removed session 16.
Jul 9 10:13:32.842399 systemd[1]: Started sshd@16-10.0.0.141:22-10.0.0.1:53672.service - OpenSSH per-connection server daemon (10.0.0.1:53672).
Jul 9 10:13:32.891781 sshd[5417]: Accepted publickey for core from 10.0.0.1 port 53672 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:32.892950 sshd-session[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:32.897255 systemd-logind[1485]: New session 17 of user core.
Jul 9 10:13:32.911877 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 9 10:13:33.105424 sshd[5420]: Connection closed by 10.0.0.1 port 53672
Jul 9 10:13:33.105283 sshd-session[5417]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:33.115315 systemd[1]: sshd@16-10.0.0.141:22-10.0.0.1:53672.service: Deactivated successfully.
Jul 9 10:13:33.117849 systemd[1]: session-17.scope: Deactivated successfully.
Jul 9 10:13:33.119036 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit.
Jul 9 10:13:33.121808 systemd[1]: Started sshd@17-10.0.0.141:22-10.0.0.1:53688.service - OpenSSH per-connection server daemon (10.0.0.1:53688).
Jul 9 10:13:33.122511 systemd-logind[1485]: Removed session 17.
Jul 9 10:13:33.169667 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 53688 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:33.170468 sshd-session[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:33.174697 systemd-logind[1485]: New session 18 of user core.
Jul 9 10:13:33.187885 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 9 10:13:34.007139 sshd[5435]: Connection closed by 10.0.0.1 port 53688
Jul 9 10:13:34.008375 sshd-session[5432]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:34.014983 systemd[1]: sshd@17-10.0.0.141:22-10.0.0.1:53688.service: Deactivated successfully.
Jul 9 10:13:34.017310 systemd[1]: session-18.scope: Deactivated successfully.
Jul 9 10:13:34.019690 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit.
Jul 9 10:13:34.022247 systemd[1]: Started sshd@18-10.0.0.141:22-10.0.0.1:53698.service - OpenSSH per-connection server daemon (10.0.0.1:53698).
Jul 9 10:13:34.023448 systemd-logind[1485]: Removed session 18.
Jul 9 10:13:34.091686 sshd[5453]: Accepted publickey for core from 10.0.0.1 port 53698 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:34.092837 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:34.098338 systemd-logind[1485]: New session 19 of user core.
Jul 9 10:13:34.104033 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 9 10:13:34.176215 containerd[1513]: time="2025-07-09T10:13:34.176161326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047\" id:\"0435c1f0723dcf493103e1d3df6a4e69037b462d9225695a63114531456d1d2d\" pid:5471 exited_at:{seconds:1752056014 nanos:175813635}"
Jul 9 10:13:34.452331 sshd[5458]: Connection closed by 10.0.0.1 port 53698
Jul 9 10:13:34.453935 sshd-session[5453]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:34.461288 systemd[1]: sshd@18-10.0.0.141:22-10.0.0.1:53698.service: Deactivated successfully.
Jul 9 10:13:34.464635 systemd[1]: session-19.scope: Deactivated successfully.
Jul 9 10:13:34.465430 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit.
Jul 9 10:13:34.467521 systemd-logind[1485]: Removed session 19.
Jul 9 10:13:34.469930 systemd[1]: Started sshd@19-10.0.0.141:22-10.0.0.1:53708.service - OpenSSH per-connection server daemon (10.0.0.1:53708).
Jul 9 10:13:34.530718 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 53708 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:34.532629 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:34.537073 systemd-logind[1485]: New session 20 of user core.
Jul 9 10:13:34.544909 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 9 10:13:34.713310 sshd[5496]: Connection closed by 10.0.0.1 port 53708
Jul 9 10:13:34.713600 sshd-session[5493]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:34.717514 systemd[1]: sshd@19-10.0.0.141:22-10.0.0.1:53708.service: Deactivated successfully.
Jul 9 10:13:34.719464 systemd[1]: session-20.scope: Deactivated successfully.
Jul 9 10:13:34.720246 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit.
Jul 9 10:13:34.721369 systemd-logind[1485]: Removed session 20.
Jul 9 10:13:39.729606 systemd[1]: Started sshd@20-10.0.0.141:22-10.0.0.1:53716.service - OpenSSH per-connection server daemon (10.0.0.1:53716).
Jul 9 10:13:39.782556 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 53716 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:39.783896 sshd-session[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:39.788441 systemd-logind[1485]: New session 21 of user core.
Jul 9 10:13:39.801956 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 9 10:13:39.980132 sshd[5520]: Connection closed by 10.0.0.1 port 53716
Jul 9 10:13:39.980656 sshd-session[5517]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:39.984631 systemd[1]: sshd@20-10.0.0.141:22-10.0.0.1:53716.service: Deactivated successfully.
Jul 9 10:13:39.987326 systemd[1]: session-21.scope: Deactivated successfully.
Jul 9 10:13:39.988124 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
Jul 9 10:13:39.989291 systemd-logind[1485]: Removed session 21.
Jul 9 10:13:40.211198 containerd[1513]: time="2025-07-09T10:13:40.211152645Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2abf7816be4886c24f4c3c8d7ef3601743abe44a24eb10a2acc8d062d8c34047\" id:\"37c98c6ecde6ef010c08841b3c42ae360e73cf9404e5fc6dcf87496753f76d4f\" pid:5546 exited_at:{seconds:1752056020 nanos:210942759}"
Jul 9 10:13:41.546840 containerd[1513]: time="2025-07-09T10:13:41.546800598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a\" id:\"03849f803e60bd805a5ca87dc6270354238045cc5f4800023562f76e6a192b7c\" pid:5568 exited_at:{seconds:1752056021 nanos:546510510}"
Jul 9 10:13:42.578523 containerd[1513]: time="2025-07-09T10:13:42.578475557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7438cc5302882d82ca983dc07d30b306dc1d1a1f303b6ea33a36b0069315bd0a\" id:\"4057f554daea36e5c142767da98bec72237c3827138b319b8a9045e4189308ce\" pid:5594 exited_at:{seconds:1752056022 nanos:578152348}"
Jul 9 10:13:44.992038 systemd[1]: Started sshd@21-10.0.0.141:22-10.0.0.1:37432.service - OpenSSH per-connection server daemon (10.0.0.1:37432).
Jul 9 10:13:45.053812 sshd[5608]: Accepted publickey for core from 10.0.0.1 port 37432 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 10:13:45.055099 sshd-session[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:13:45.059063 systemd-logind[1485]: New session 22 of user core.
Jul 9 10:13:45.068068 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 9 10:13:45.252949 sshd[5611]: Connection closed by 10.0.0.1 port 37432
Jul 9 10:13:45.252263 sshd-session[5608]: pam_unix(sshd:session): session closed for user core
Jul 9 10:13:45.256175 systemd[1]: sshd@21-10.0.0.141:22-10.0.0.1:37432.service: Deactivated successfully.
Jul 9 10:13:45.259826 systemd[1]: session-22.scope: Deactivated successfully.
Jul 9 10:13:45.260918 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
Jul 9 10:13:45.262216 systemd-logind[1485]: Removed session 22.