Jul 6 23:36:30.844896 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 6 23:36:30.845038 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025
Jul 6 23:36:30.845049 kernel: KASLR enabled
Jul 6 23:36:30.845055 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:36:30.845061 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Jul 6 23:36:30.845066 kernel: random: crng init done
Jul 6 23:36:30.845073 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 6 23:36:30.845079 kernel: secureboot: Secure boot enabled
Jul 6 23:36:30.845085 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:36:30.845093 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Jul 6 23:36:30.845100 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 6 23:36:30.845105 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:36:30.845111 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:36:30.845117 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:36:30.845124 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:36:30.845132 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:36:30.845138 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:36:30.845144 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:36:30.845151 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:36:30.845157 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001)
Jul 6 23:36:30.845163 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 6 23:36:30.845169 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:36:30.845175 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:36:30.845181 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Jul 6 23:36:30.845187 kernel: Zone ranges:
Jul 6 23:36:30.845195 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:36:30.845201 kernel: DMA32 empty
Jul 6 23:36:30.845207 kernel: Normal empty
Jul 6 23:36:30.845212 kernel: Device empty
Jul 6 23:36:30.845218 kernel: Movable zone start for each node
Jul 6 23:36:30.845224 kernel: Early memory node ranges
Jul 6 23:36:30.845230 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Jul 6 23:36:30.845236 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Jul 6 23:36:30.845242 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Jul 6 23:36:30.845248 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Jul 6 23:36:30.845254 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Jul 6 23:36:30.845260 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Jul 6 23:36:30.845267 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Jul 6 23:36:30.845273 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Jul 6 23:36:30.845279 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 6 23:36:30.845288 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:36:30.845295 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 6 23:36:30.845301 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Jul 6 23:36:30.845307 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:36:30.845315 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:36:30.845321 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:36:30.845328 kernel: psci: Trusted OS migration not required
Jul 6 23:36:30.845334 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:36:30.845341 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 6 23:36:30.845347 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:36:30.845354 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:36:30.845360 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 6 23:36:30.845367 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:36:30.845375 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:36:30.845381 kernel: CPU features: detected: Spectre-v4
Jul 6 23:36:30.845388 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:36:30.845394 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:36:30.845401 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:36:30.845407 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:36:30.845413 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:36:30.845420 kernel: alternatives: applying boot alternatives
Jul 6 23:36:30.845427 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:36:30.845434 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:36:30.845440 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:36:30.845448 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:36:30.845454 kernel: Fallback order for Node 0: 0
Jul 6 23:36:30.845461 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 6 23:36:30.845467 kernel: Policy zone: DMA
Jul 6 23:36:30.845473 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:36:30.845480 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 6 23:36:30.845486 kernel: software IO TLB: area num 4.
Jul 6 23:36:30.845492 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 6 23:36:30.845499 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Jul 6 23:36:30.845505 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:36:30.845511 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:36:30.845518 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:36:30.845526 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:36:30.845533 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:36:30.845539 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:36:30.845545 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:36:30.845552 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:36:30.845558 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:36:30.845565 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:36:30.845571 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:36:30.845577 kernel: GICv3: 256 SPIs implemented
Jul 6 23:36:30.845584 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:36:30.845590 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:36:30.845597 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 6 23:36:30.845604 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 6 23:36:30.845616 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 6 23:36:30.845623 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 6 23:36:30.845629 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:36:30.845639 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:36:30.845648 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 6 23:36:30.845657 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 6 23:36:30.845663 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:36:30.845670 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:36:30.845677 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 6 23:36:30.845683 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:36:30.845692 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:36:30.845699 kernel: arm-pv: using stolen time PV
Jul 6 23:36:30.845705 kernel: Console: colour dummy device 80x25
Jul 6 23:36:30.845712 kernel: ACPI: Core revision 20240827
Jul 6 23:36:30.845719 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:36:30.845726 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:36:30.845732 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:36:30.845739 kernel: landlock: Up and running.
Jul 6 23:36:30.845746 kernel: SELinux: Initializing.
Jul 6 23:36:30.845753 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:36:30.845760 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:36:30.845767 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:36:30.845774 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:36:30.845781 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:36:30.845788 kernel: Remapping and enabling EFI services.
Jul 6 23:36:30.845794 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:36:30.845801 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:36:30.845807 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 6 23:36:30.845814 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 6 23:36:30.845827 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:36:30.845834 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 6 23:36:30.845842 kernel: Detected PIPT I-cache on CPU2
Jul 6 23:36:30.845849 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 6 23:36:30.845857 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 6 23:36:30.845864 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:36:30.845871 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 6 23:36:30.845878 kernel: Detected PIPT I-cache on CPU3
Jul 6 23:36:30.845886 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 6 23:36:30.845893 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 6 23:36:30.845900 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:36:30.845921 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 6 23:36:30.845928 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:36:30.845935 kernel: SMP: Total of 4 processors activated.
Jul 6 23:36:30.845942 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:36:30.845949 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:36:30.845956 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:36:30.845966 kernel: CPU features: detected: Common not Private translations
Jul 6 23:36:30.845972 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:36:30.845979 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 6 23:36:30.845986 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:36:30.845993 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:36:30.846000 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:36:30.846013 kernel: CPU features: detected: RAS Extension Support
Jul 6 23:36:30.846020 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 6 23:36:30.846026 kernel: alternatives: applying system-wide alternatives
Jul 6 23:36:30.846035 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 6 23:36:30.846043 kernel: Memory: 2421860K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 128092K reserved, 16384K cma-reserved)
Jul 6 23:36:30.846050 kernel: devtmpfs: initialized
Jul 6 23:36:30.846056 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:36:30.846064 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:36:30.846071 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:36:30.846078 kernel: 0 pages in range for non-PLT usage
Jul 6 23:36:30.846085 kernel: 508432 pages in range for PLT usage
Jul 6 23:36:30.846092 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:36:30.846100 kernel: SMBIOS 3.0.0 present.
Jul 6 23:36:30.846107 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 6 23:36:30.846114 kernel: DMI: Memory slots populated: 1/1
Jul 6 23:36:30.846121 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:36:30.846128 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:36:30.846135 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:36:30.846142 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:36:30.846149 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:36:30.846156 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Jul 6 23:36:30.846165 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:36:30.846171 kernel: cpuidle: using governor menu
Jul 6 23:36:30.846178 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:36:30.846185 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:36:30.846192 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:36:30.846199 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:36:30.846206 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:36:30.846213 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:36:30.846220 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:36:30.846228 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:36:30.846235 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:36:30.846242 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:36:30.846249 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:36:30.846256 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:36:30.846263 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:36:30.846270 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:36:30.846276 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:36:30.846283 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:36:30.846292 kernel: ACPI: Interpreter enabled
Jul 6 23:36:30.846299 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:36:30.846306 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:36:30.846312 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:36:30.846319 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:36:30.846326 kernel: ACPI: CPU2 has been hot-added
Jul 6 23:36:30.846333 kernel: ACPI: CPU3 has been hot-added
Jul 6 23:36:30.846340 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:36:30.846347 kernel: printk: legacy console [ttyAMA0] enabled
Jul 6 23:36:30.846356 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:36:30.846499 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:36:30.846568 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:36:30.846628 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:36:30.846697 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 6 23:36:30.846755 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 6 23:36:30.846765 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 6 23:36:30.846774 kernel: PCI host bridge to bus 0000:00
Jul 6 23:36:30.846841 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 6 23:36:30.846898 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:36:30.846974 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 6 23:36:30.847042 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:36:30.847132 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 6 23:36:30.847204 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 6 23:36:30.847270 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 6 23:36:30.847332 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 6 23:36:30.847393 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 6 23:36:30.847453 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 6 23:36:30.847514 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 6 23:36:30.847575 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 6 23:36:30.847629 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 6 23:36:30.847686 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:36:30.847740 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 6 23:36:30.847749 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:36:30.847756 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:36:30.847764 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:36:30.847771 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:36:30.847778 kernel: iommu: Default domain type: Translated
Jul 6 23:36:30.847785 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:36:30.847794 kernel: efivars: Registered efivars operations
Jul 6 23:36:30.847801 kernel: vgaarb: loaded
Jul 6 23:36:30.847808 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:36:30.847815 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:36:30.847822 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:36:30.847829 kernel: pnp: PnP ACPI init
Jul 6 23:36:30.847895 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 6 23:36:30.847916 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:36:30.847941 kernel: NET: Registered PF_INET protocol family
Jul 6 23:36:30.847949 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:36:30.847956 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:36:30.847964 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:36:30.847972 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:36:30.847979 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:36:30.847986 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:36:30.847993 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:36:30.848001 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:36:30.848018 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:36:30.848025 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:36:30.848032 kernel: kvm [1]: HYP mode not available
Jul 6 23:36:30.848039 kernel: Initialise system trusted keyrings
Jul 6 23:36:30.848046 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:36:30.848053 kernel: Key type asymmetric registered
Jul 6 23:36:30.848060 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:36:30.848067 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:36:30.848074 kernel: io scheduler mq-deadline registered
Jul 6 23:36:30.848083 kernel: io scheduler kyber registered
Jul 6 23:36:30.848090 kernel: io scheduler bfq registered
Jul 6 23:36:30.848097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 6 23:36:30.848104 kernel: ACPI: button: Power Button [PWRB]
Jul 6 23:36:30.848112 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:36:30.848186 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 6 23:36:30.848196 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:36:30.848203 kernel: thunder_xcv, ver 1.0
Jul 6 23:36:30.848210 kernel: thunder_bgx, ver 1.0
Jul 6 23:36:30.848220 kernel: nicpf, ver 1.0
Jul 6 23:36:30.848227 kernel: nicvf, ver 1.0
Jul 6 23:36:30.848299 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:36:30.848358 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:36:30 UTC (1751844990)
Jul 6 23:36:30.848367 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:36:30.848375 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 6 23:36:30.848382 kernel: watchdog: NMI not fully supported
Jul 6 23:36:30.848389 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:36:30.848398 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:36:30.848405 kernel: Segment Routing with IPv6
Jul 6 23:36:30.848412 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:36:30.848419 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:36:30.848426 kernel: Key type dns_resolver registered
Jul 6 23:36:30.848433 kernel: registered taskstats version 1
Jul 6 23:36:30.848440 kernel: Loading compiled-in X.509 certificates
Jul 6 23:36:30.848447 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718'
Jul 6 23:36:30.848454 kernel: Demotion targets for Node 0: null
Jul 6 23:36:30.848463 kernel: Key type .fscrypt registered
Jul 6 23:36:30.848470 kernel: Key type fscrypt-provisioning registered
Jul 6 23:36:30.848477 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:36:30.848484 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:36:30.848491 kernel: ima: No architecture policies found
Jul 6 23:36:30.848498 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:36:30.848505 kernel: clk: Disabling unused clocks
Jul 6 23:36:30.848512 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:36:30.848520 kernel: Warning: unable to open an initial console.
Jul 6 23:36:30.848528 kernel: Freeing unused kernel memory: 39488K
Jul 6 23:36:30.848535 kernel: Run /init as init process
Jul 6 23:36:30.848542 kernel: with arguments:
Jul 6 23:36:30.848549 kernel: /init
Jul 6 23:36:30.848556 kernel: with environment:
Jul 6 23:36:30.848563 kernel: HOME=/
Jul 6 23:36:30.848570 kernel: TERM=linux
Jul 6 23:36:30.848577 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:36:30.848585 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:36:30.848597 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:36:30.848605 systemd[1]: Detected virtualization kvm.
Jul 6 23:36:30.848612 systemd[1]: Detected architecture arm64.
Jul 6 23:36:30.848619 systemd[1]: Running in initrd.
Jul 6 23:36:30.848627 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:36:30.848635 systemd[1]: Hostname set to <localhost>.
Jul 6 23:36:30.848642 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:36:30.848651 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:36:30.848658 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:36:30.848666 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:36:30.848674 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:36:30.848682 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:36:30.848689 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:36:30.848698 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:36:30.848708 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:36:30.848716 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:36:30.848724 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:36:30.848731 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:36:30.848739 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:36:30.848747 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:36:30.848755 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:36:30.848763 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:36:30.848771 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:36:30.848779 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:36:30.848786 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:36:30.848794 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:36:30.848801 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:36:30.848809 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:36:30.848816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:36:30.848824 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:36:30.848832 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:36:30.848841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:36:30.848849 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:36:30.848857 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:36:30.848865 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:36:30.848873 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:36:30.848881 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:36:30.848888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:36:30.848896 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:36:30.848925 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:36:30.848936 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:36:30.848945 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:36:30.848973 systemd-journald[244]: Collecting audit messages is disabled.
Jul 6 23:36:30.848996 systemd-journald[244]: Journal started
Jul 6 23:36:30.849022 systemd-journald[244]: Runtime Journal (/run/log/journal/fbf1543d23c14d5c873b9fc87d2364e9) is 6M, max 48.5M, 42.4M free.
Jul 6 23:36:30.843008 systemd-modules-load[247]: Inserted module 'overlay'
Jul 6 23:36:30.859160 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:36:30.862422 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:36:30.862460 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:36:30.867270 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:36:30.869181 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:36:30.871728 systemd-modules-load[247]: Inserted module 'br_netfilter'
Jul 6 23:36:30.873722 kernel: Bridge firewalling registered
Jul 6 23:36:30.874256 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:36:30.881122 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:36:30.883800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:36:30.885433 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:36:30.890432 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:36:30.895987 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:36:30.899764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:36:30.902269 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:36:30.903703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:36:30.907334 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:36:30.909989 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:36:30.935783 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:36:30.950637 systemd-resolved[287]: Positive Trust Anchors:
Jul 6 23:36:30.950657 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:36:30.950688 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:36:30.958422 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jul 6 23:36:30.959433 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:36:30.960668 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:36:31.020974 kernel: SCSI subsystem initialized
Jul 6 23:36:31.025922 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:36:31.033945 kernel: iscsi: registered transport (tcp)
Jul 6 23:36:31.046931 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:36:31.046954 kernel: QLogic iSCSI HBA Driver
Jul 6 23:36:31.065766 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:36:31.087962 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:36:31.089822 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:36:31.148766 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:36:31.151363 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:36:31.221946 kernel: raid6: neonx8 gen() 15541 MB/s
Jul 6 23:36:31.238930 kernel: raid6: neonx4 gen() 15811 MB/s
Jul 6 23:36:31.255931 kernel: raid6: neonx2 gen() 13245 MB/s
Jul 6 23:36:31.272926 kernel: raid6: neonx1 gen() 10476 MB/s
Jul 6 23:36:31.289929 kernel: raid6: int64x8 gen() 6896 MB/s
Jul 6 23:36:31.306935 kernel: raid6: int64x4 gen() 7354 MB/s
Jul 6 23:36:31.323923 kernel: raid6: int64x2 gen() 6105 MB/s
Jul 6 23:36:31.341041 kernel: raid6: int64x1 gen() 5049 MB/s
Jul 6 23:36:31.341062 kernel: raid6: using algorithm neonx4 gen() 15811 MB/s
Jul 6 23:36:31.359065 kernel: raid6: .... xor() 12319 MB/s, rmw enabled
Jul 6 23:36:31.359096 kernel: raid6: using neon recovery algorithm
Jul 6 23:36:31.363927 kernel: xor: measuring software checksum speed
Jul 6 23:36:31.365287 kernel: 8regs : 18022 MB/sec
Jul 6 23:36:31.365307 kernel: 32regs : 21658 MB/sec
Jul 6 23:36:31.366030 kernel: arm64_neon : 28022 MB/sec
Jul 6 23:36:31.366045 kernel: xor: using function: arm64_neon (28022 MB/sec)
Jul 6 23:36:31.425940 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:36:31.431662 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:36:31.434192 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:36:31.461953 systemd-udevd[496]: Using default interface naming scheme 'v255'.
Jul 6 23:36:31.466140 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:36:31.468657 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:36:31.494681 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
Jul 6 23:36:31.518019 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:36:31.520331 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:36:31.573542 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:36:31.576205 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:36:31.623370 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 6 23:36:31.623523 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 6 23:36:31.644700 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:36:31.644760 kernel: GPT:9289727 != 19775487
Jul 6 23:36:31.644769 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:36:31.644778 kernel: GPT:9289727 != 19775487
Jul 6 23:36:31.645017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:36:31.646701 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:36:31.646720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:36:31.645129 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:36:31.649257 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:36:31.650983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:36:31.679736 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:36:31.681231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:36:31.692589 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:36:31.695464 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:36:31.722754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:36:31.729309 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:36:31.730571 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:36:31.733083 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:36:31.736083 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:36:31.738268 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:36:31.741214 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:36:31.743059 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:36:31.759042 disk-uuid[589]: Primary Header is updated.
Jul 6 23:36:31.759042 disk-uuid[589]: Secondary Entries is updated.
Jul 6 23:36:31.759042 disk-uuid[589]: Secondary Header is updated.
Jul 6 23:36:31.765925 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:36:31.766024 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:36:32.777938 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:36:32.777993 disk-uuid[593]: The operation has completed successfully.
Jul 6 23:36:32.805235 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:36:32.805337 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:36:32.831819 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:36:32.858066 sh[609]: Success
Jul 6 23:36:32.873985 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:36:32.874045 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:36:32.874058 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 6 23:36:32.881380 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 6 23:36:32.909852 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:36:32.912736 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:36:32.930441 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:36:32.938908 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 6 23:36:32.938953 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (622)
Jul 6 23:36:32.940362 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d
Jul 6 23:36:32.941437 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:36:32.941453 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 6 23:36:32.945755 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:36:32.947124 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:36:32.948639 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:36:32.949444 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:36:32.951127 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:36:32.979749 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (654)
Jul 6 23:36:32.979798 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:36:32.979815 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:36:32.980941 kernel: BTRFS info (device vda6): using free-space-tree
Jul 6 23:36:32.987932 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:36:32.988355 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:36:32.990694 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:36:33.060988 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:36:33.064133 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:36:33.123577 systemd-networkd[797]: lo: Link UP
Jul 6 23:36:33.123591 systemd-networkd[797]: lo: Gained carrier
Jul 6 23:36:33.124371 systemd-networkd[797]: Enumeration completed
Jul 6 23:36:33.124463 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:36:33.125854 systemd[1]: Reached target network.target - Network.
Jul 6 23:36:33.126149 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:36:33.126153 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:36:33.127989 systemd-networkd[797]: eth0: Link UP
Jul 6 23:36:33.127993 systemd-networkd[797]: eth0: Gained carrier
Jul 6 23:36:33.128011 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:36:33.166126 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:36:33.208971 ignition[701]: Ignition 2.21.0
Jul 6 23:36:33.208982 ignition[701]: Stage: fetch-offline
Jul 6 23:36:33.209021 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:36:33.209030 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:36:33.209214 ignition[701]: parsed url from cmdline: ""
Jul 6 23:36:33.209217 ignition[701]: no config URL provided
Jul 6 23:36:33.209221 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:36:33.209227 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:36:33.209249 ignition[701]: op(1): [started] loading QEMU firmware config module
Jul 6 23:36:33.209254 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 6 23:36:33.216175 ignition[701]: op(1): [finished] loading QEMU firmware config module
Jul 6 23:36:33.256234 ignition[701]: parsing config with SHA512: cb08bb27db13316b4ad49b7efafb4087a0b95ea4211755397c2a5e64ef63fa7681f580eb5b4cdc6e3fff51fc46200bcf7b409ab5180a630067119e8f707047a2
Jul 6 23:36:33.260503 unknown[701]: fetched base config from "system"
Jul 6 23:36:33.260515 unknown[701]: fetched user config from "qemu"
Jul 6 23:36:33.261102 ignition[701]: fetch-offline: fetch-offline passed
Jul 6 23:36:33.261258 systemd-resolved[287]: Detected conflict on linux IN A 10.0.0.91
Jul 6 23:36:33.261164 ignition[701]: Ignition finished successfully
Jul 6 23:36:33.261266 systemd-resolved[287]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Jul 6 23:36:33.263370 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:36:33.264727 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 6 23:36:33.265608 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:36:33.311537 ignition[810]: Ignition 2.21.0
Jul 6 23:36:33.311554 ignition[810]: Stage: kargs
Jul 6 23:36:33.311940 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:36:33.311951 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:36:33.314705 ignition[810]: kargs: kargs passed
Jul 6 23:36:33.314753 ignition[810]: Ignition finished successfully
Jul 6 23:36:33.317232 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:36:33.319382 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:36:33.347135 ignition[818]: Ignition 2.21.0
Jul 6 23:36:33.347151 ignition[818]: Stage: disks
Jul 6 23:36:33.347291 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:36:33.347301 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:36:33.350142 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:36:33.348538 ignition[818]: disks: disks passed
Jul 6 23:36:33.351761 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:36:33.348594 ignition[818]: Ignition finished successfully
Jul 6 23:36:33.353469 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:36:33.355055 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:36:33.356870 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:36:33.358441 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:36:33.361260 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:36:33.393403 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 6 23:36:33.398498 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:36:33.400773 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:36:33.475921 kernel: EXT4-fs (vda9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none.
Jul 6 23:36:33.476293 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:36:33.477584 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:36:33.480085 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:36:33.481746 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:36:33.482799 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:36:33.482842 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:36:33.482867 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:36:33.501472 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:36:33.504090 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:36:33.510488 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (836)
Jul 6 23:36:33.510510 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:36:33.510520 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:36:33.510530 kernel: BTRFS info (device vda6): using free-space-tree
Jul 6 23:36:33.513427 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:36:33.568273 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:36:33.572272 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:36:33.576223 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:36:33.580157 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:36:33.661390 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:36:33.663425 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:36:33.665068 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:36:33.684929 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:36:33.702558 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:36:33.710642 ignition[949]: INFO : Ignition 2.21.0
Jul 6 23:36:33.710642 ignition[949]: INFO : Stage: mount
Jul 6 23:36:33.712339 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:36:33.712339 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:36:33.712339 ignition[949]: INFO : mount: mount passed
Jul 6 23:36:33.712339 ignition[949]: INFO : Ignition finished successfully
Jul 6 23:36:33.713506 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:36:33.716356 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:36:33.937524 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:36:33.939062 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:36:33.972539 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (961)
Jul 6 23:36:33.972597 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:36:33.972616 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:36:33.973556 kernel: BTRFS info (device vda6): using free-space-tree
Jul 6 23:36:33.977036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:36:34.001889 ignition[978]: INFO : Ignition 2.21.0
Jul 6 23:36:34.001889 ignition[978]: INFO : Stage: files
Jul 6 23:36:34.003603 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:36:34.003603 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:36:34.006214 ignition[978]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:36:34.007698 ignition[978]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:36:34.007698 ignition[978]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:36:34.011674 ignition[978]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:36:34.013490 ignition[978]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:36:34.013490 ignition[978]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:36:34.012321 unknown[978]: wrote ssh authorized keys file for user: core
Jul 6 23:36:34.018029 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 6 23:36:34.018029 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 6 23:36:34.053162 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:36:34.166228 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 6 23:36:34.166228 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:36:34.170572 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:36:34.170572 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:36:34.170572 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:36:34.170572 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:36:34.170572 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:36:34.170572 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:36:34.170572 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:36:34.170572 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:36:34.170572 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:36:34.170572 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:36:34.189317 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:36:34.189317 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:36:34.189317 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 6 23:36:34.688118 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:36:35.019183 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:36:35.019183 ignition[978]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:36:35.023515 ignition[978]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:36:35.026022 ignition[978]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:36:35.026022 ignition[978]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:36:35.026022 ignition[978]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 6 23:36:35.031596 ignition[978]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:36:35.031596 ignition[978]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:36:35.031596 ignition[978]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 6 23:36:35.031596 ignition[978]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:36:35.051982 ignition[978]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:36:35.055972 ignition[978]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:36:35.055972 ignition[978]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:36:35.055972 ignition[978]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:36:35.055972 ignition[978]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:36:35.055972 ignition[978]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:36:35.055972 ignition[978]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:36:35.055972 ignition[978]: INFO : files: files passed
Jul 6 23:36:35.055972 ignition[978]: INFO : Ignition finished successfully
Jul 6 23:36:35.059638 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:36:35.063814 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:36:35.069243 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:36:35.091138 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:36:35.091241 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:36:35.093205 systemd-networkd[797]: eth0: Gained IPv6LL
Jul 6 23:36:35.096734 initrd-setup-root-after-ignition[1007]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 6 23:36:35.098265 initrd-setup-root-after-ignition[1009]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:36:35.098265 initrd-setup-root-after-ignition[1009]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:36:35.103431 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:36:35.106327 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:36:35.107790 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:36:35.112051 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:36:35.164605 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:36:35.164744 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:36:35.167435 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:36:35.169622 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:36:35.171654 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:36:35.173088 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:36:35.208548 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:36:35.210849 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:36:35.236685 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:36:35.238083 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:36:35.240200 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:36:35.242106 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:36:35.242235 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:36:35.245088 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:36:35.247324 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:36:35.249048 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:36:35.250857 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:36:35.252979 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:36:35.255124 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:36:35.257217 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:36:35.259526 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:36:35.262552 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:36:35.264778 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:36:35.266756 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:36:35.268411 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:36:35.268554 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:36:35.271034 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:36:35.273746 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:36:35.275895 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:36:35.277025 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:36:35.278354 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:36:35.278489 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:36:35.281844 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:36:35.281972 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:36:35.283336 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:36:35.285081 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:36:35.286006 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:36:35.287341 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:36:35.289197 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:36:35.291261 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:36:35.291357 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:36:35.293132 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:36:35.293211 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:36:35.295092 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:36:35.295215 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:36:35.297680 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:36:35.297796 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:36:35.300418 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:36:35.302119 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:36:35.302249 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:36:35.325586 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:36:35.326517 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:36:35.326663 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:36:35.328630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:36:35.328736 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:36:35.335157 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:36:35.337313 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:36:35.341361 ignition[1034]: INFO : Ignition 2.21.0
Jul 6 23:36:35.341361 ignition[1034]: INFO : Stage: umount
Jul 6 23:36:35.343159 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:36:35.343159 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:36:35.343159 ignition[1034]: INFO : umount: umount passed
Jul 6 23:36:35.343159 ignition[1034]: INFO : Ignition finished successfully
Jul 6 23:36:35.342852 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:36:35.346621 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:36:35.346711 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:36:35.348898 systemd[1]: Stopped target network.target - Network.
Jul 6 23:36:35.350420 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:36:35.350490 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:36:35.352492 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:36:35.352540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:36:35.354344 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:36:35.354393 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:36:35.356126 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:36:35.356167 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:36:35.358113 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:36:35.360025 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:36:35.367676 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:36:35.367787 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:36:35.371485 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 6 23:36:35.371678 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:36:35.371766 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:36:35.376157 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 6 23:36:35.376789 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 6 23:36:35.378741 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:36:35.378796 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:36:35.382368 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:36:35.383275 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:36:35.383341 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:36:35.386116 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:36:35.386163 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:36:35.391931 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:36:35.391994 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:36:35.393513 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:36:35.393583 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:36:35.396927 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:36:35.401214 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:36:35.401276 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:36:35.401598 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:36:35.401709 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:36:35.404874 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:36:35.404984 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:36:35.416789 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:36:35.416933 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:36:35.421654 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:36:35.421816 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:36:35.424215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:36:35.424262 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:36:35.426233 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:36:35.426274 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:36:35.428163 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:36:35.428232 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:36:35.431048 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:36:35.431095 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:36:35.433724 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:36:35.433774 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:36:35.436968 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:36:35.438278 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 6 23:36:35.438341 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:36:35.441432 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:36:35.441476 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:36:35.444791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:36:35.444834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:36:35.449482 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 6 23:36:35.449530 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 6 23:36:35.449564 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:36:35.455192 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:36:35.456944 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:36:35.459075 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:36:35.461704 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:36:35.484852 systemd[1]: Switching root.
Jul 6 23:36:35.515189 systemd-journald[244]: Journal stopped
Jul 6 23:36:36.406194 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:36:36.406255 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:36:36.406268 kernel: SELinux: policy capability open_perms=1
Jul 6 23:36:36.406277 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:36:36.406286 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:36:36.406295 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:36:36.406315 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:36:36.406324 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:36:36.406336 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:36:36.406347 kernel: SELinux: policy capability userspace_initial_context=0
Jul 6 23:36:36.406356 kernel: audit: type=1403 audit(1751844995.688:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:36:36.406369 systemd[1]: Successfully loaded SELinux policy in 47.697ms.
Jul 6 23:36:36.406388 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.678ms.
Jul 6 23:36:36.406399 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:36:36.406410 systemd[1]: Detected virtualization kvm.
Jul 6 23:36:36.406419 systemd[1]: Detected architecture arm64.
Jul 6 23:36:36.406430 systemd[1]: Detected first boot.
Jul 6 23:36:36.406441 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:36:36.406451 zram_generator::config[1080]: No configuration found.
Jul 6 23:36:36.406461 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:36:36.406473 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:36:36.406484 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:36:36.406495 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:36:36.406505 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:36:36.406514 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:36:36.406525 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:36:36.406535 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:36:36.406545 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:36:36.406555 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:36:36.406565 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:36:36.406575 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:36:36.406586 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:36:36.406596 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:36:36.406607 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:36:36.406618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:36:36.406628 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:36:36.406638 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:36:36.406649 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:36:36.406659 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:36:36.406669 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 6 23:36:36.406683 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:36:36.406696 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:36:36.406706 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:36:36.406716 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:36:36.406726 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:36:36.406744 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:36:36.406754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:36:36.406764 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:36:36.406774 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:36:36.406783 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:36:36.406794 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:36:36.406805 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:36:36.406815 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:36:36.406825 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:36:36.406836 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:36:36.406846 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:36:36.406855 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:36:36.406866 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:36:36.406876 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:36:36.406886 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:36:36.406898 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:36:36.406925 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:36:36.406937 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:36:36.406947 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:36:36.406958 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:36:36.406967 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:36:36.406978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:36:36.406994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:36:36.407009 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:36:36.407019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:36:36.407029 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:36:36.407039 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:36:36.407049 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:36:36.407059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:36:36.407069 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:36:36.407080 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:36:36.407094 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:36:36.407104 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:36:36.407114 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:36:36.407125 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:36:36.407135 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:36:36.407144 kernel: fuse: init (API version 7.41)
Jul 6 23:36:36.407154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:36:36.407164 kernel: loop: module loaded
Jul 6 23:36:36.407175 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:36:36.407186 kernel: ACPI: bus type drm_connector registered
Jul 6 23:36:36.407196 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:36:36.407206 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:36:36.407216 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:36:36.407227 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:36:36.407238 systemd[1]: Stopped verity-setup.service.
Jul 6 23:36:36.407248 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:36:36.407258 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:36:36.407268 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:36:36.407279 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:36:36.407289 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:36:36.407321 systemd-journald[1152]: Collecting audit messages is disabled.
Jul 6 23:36:36.407343 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:36:36.407354 systemd-journald[1152]: Journal started
Jul 6 23:36:36.407374 systemd-journald[1152]: Runtime Journal (/run/log/journal/fbf1543d23c14d5c873b9fc87d2364e9) is 6M, max 48.5M, 42.4M free.
Jul 6 23:36:36.158195 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:36:36.177049 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:36:36.177468 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:36:36.414254 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:36:36.416324 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:36:36.417232 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:36:36.418804 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:36:36.419005 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:36:36.420624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:36:36.420806 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:36:36.422196 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:36:36.422364 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:36:36.423656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:36:36.423811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:36:36.425276 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:36:36.425442 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:36:36.426744 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:36:36.426928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:36:36.428283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:36:36.429809 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:36:36.431314 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:36:36.433959 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:36:36.446705 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:36:36.449358 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:36:36.451455 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:36:36.452660 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:36:36.452700 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:36:36.454631 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:36:36.461481 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:36:36.462625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:36:36.463958 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:36:36.465949 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:36:36.467231 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:36:36.468081 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:36:36.469200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:36:36.475369 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:36:36.480443 systemd-journald[1152]: Time spent on flushing to /var/log/journal/fbf1543d23c14d5c873b9fc87d2364e9 is 14.954ms for 882 entries.
Jul 6 23:36:36.480443 systemd-journald[1152]: System Journal (/var/log/journal/fbf1543d23c14d5c873b9fc87d2364e9) is 8M, max 195.6M, 187.6M free.
Jul 6 23:36:36.501545 systemd-journald[1152]: Received client request to flush runtime journal.
Jul 6 23:36:36.479130 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:36:36.481858 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:36:36.488226 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:36:36.490623 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:36:36.492103 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:36:36.494943 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:36:36.501043 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:36:36.505300 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:36:36.507503 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:36:36.513535 kernel: loop0: detected capacity change from 0 to 138376
Jul 6 23:36:36.527768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:36:36.538718 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:36:36.545425 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:36:36.551447 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:36:36.558090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:36:36.560193 kernel: loop1: detected capacity change from 0 to 107312
Jul 6 23:36:36.582935 kernel: loop2: detected capacity change from 0 to 203944
Jul 6 23:36:36.584998 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Jul 6 23:36:36.585017 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Jul 6 23:36:36.591954 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:36:36.635971 kernel: loop3: detected capacity change from 0 to 138376
Jul 6 23:36:36.643412 kernel: loop4: detected capacity change from 0 to 107312
Jul 6 23:36:36.650391 kernel: loop5: detected capacity change from 0 to 203944
Jul 6 23:36:36.654283 (sd-merge)[1219]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 6 23:36:36.654664 (sd-merge)[1219]: Merged extensions into '/usr'.
Jul 6 23:36:36.658296 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:36:36.658319 systemd[1]: Reloading...
Jul 6 23:36:36.720081 zram_generator::config[1245]: No configuration found.
Jul 6 23:36:36.784138 ldconfig[1191]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:36:36.809340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:36:36.874720 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:36:36.875144 systemd[1]: Reloading finished in 216 ms.
Jul 6 23:36:36.906948 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:36:36.908584 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:36:36.930275 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:36:36.932308 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:36:36.944658 systemd[1]: Reload requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:36:36.944675 systemd[1]: Reloading...
Jul 6 23:36:36.950703 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 6 23:36:36.950745 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 6 23:36:36.951019 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:36:36.951212 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:36:36.951843 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:36:36.952095 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Jul 6 23:36:36.952148 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Jul 6 23:36:36.954613 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:36:36.954629 systemd-tmpfiles[1280]: Skipping /boot
Jul 6 23:36:36.963693 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:36:36.963712 systemd-tmpfiles[1280]: Skipping /boot
Jul 6 23:36:36.991980 zram_generator::config[1310]: No configuration found.
Jul 6 23:36:37.057464 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:36:37.124015 systemd[1]: Reloading finished in 179 ms.
Jul 6 23:36:37.146544 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:36:37.148926 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:36:37.163181 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:36:37.166204 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:36:37.170523 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:36:37.173548 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:36:37.176287 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:36:37.180715 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:36:37.188630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:36:37.201380 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:36:37.205383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:36:37.207872 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:36:37.211073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:36:37.211206 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:36:37.212293 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:36:37.214410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:36:37.214556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:36:37.216409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:36:37.216561 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:36:37.220650 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:36:37.224582 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:36:37.224802 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:36:37.227836 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:36:37.228224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:36:37.230150 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:36:37.240183 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:36:37.241303 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:36:37.242475 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:36:37.247428 augenrules[1379]: No rules
Jul 6 23:36:37.247545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:36:37.250015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:36:37.252668 systemd-udevd[1348]: Using default interface naming scheme 'v255'.
Jul 6 23:36:37.253129 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:36:37.262160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:36:37.263505 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:36:37.263641 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:36:37.263737 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:36:37.264831 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:36:37.265046 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:36:37.266975 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:36:37.268687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:36:37.268855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:36:37.270624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:36:37.270766 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:36:37.272868 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:36:37.273064 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:36:37.274607 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:36:37.287276 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:36:37.288560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:36:37.290187 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:36:37.294176 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:36:37.301095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:36:37.313346 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:36:37.314885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:36:37.315051 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:36:37.317949 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:36:37.319230 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:36:37.320607 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:36:37.323212 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:36:37.325654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:36:37.326121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:36:37.330450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:36:37.330642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:36:37.333515 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:36:37.335007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:36:37.336739 augenrules[1413]: /sbin/augenrules: No change
Jul 6 23:36:37.347935 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:36:37.349239 augenrules[1446]: No rules
Jul 6 23:36:37.350312 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:36:37.353276 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:36:37.354924 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:36:37.361849 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 6 23:36:37.366388 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:36:37.366439 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:36:37.369074 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:36:37.425029 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:36:37.427593 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:36:37.458967 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:36:37.499432 systemd-networkd[1428]: lo: Link UP
Jul 6 23:36:37.499440 systemd-networkd[1428]: lo: Gained carrier
Jul 6 23:36:37.500297 systemd-networkd[1428]: Enumeration completed
Jul 6 23:36:37.500437 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:36:37.500925 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:36:37.500929 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:36:37.507641 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:36:37.513229 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:36:37.515400 systemd-networkd[1428]: eth0: Link UP Jul 6 23:36:37.515509 systemd-networkd[1428]: eth0: Gained carrier Jul 6 23:36:37.515528 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:36:37.520001 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:36:37.522179 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:36:37.526830 systemd-resolved[1347]: Positive Trust Anchors: Jul 6 23:36:37.527279 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:36:37.527380 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:36:37.542007 systemd-resolved[1347]: Defaulting to hostname 'linux'. Jul 6 23:36:37.543894 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:36:37.545840 systemd[1]: Reached target network.target - Network. Jul 6 23:36:37.546825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:36:37.548149 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:36:37.551128 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:36:37.552514 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jul 6 23:36:37.554128 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:36:37.555347 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:36:37.556721 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:36:37.558098 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:36:37.558138 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:36:37.559059 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:36:37.560889 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:36:37.563040 systemd-networkd[1428]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:36:37.563953 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:36:37.568409 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:36:37.569718 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Jul 6 23:36:37.570507 systemd-timesyncd[1462]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 6 23:36:37.570559 systemd-timesyncd[1462]: Initial clock synchronization to Sun 2025-07-06 23:36:37.767973 UTC. Jul 6 23:36:37.571308 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:36:37.572725 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:36:37.578631 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:36:37.580327 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:36:37.582559 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jul 6 23:36:37.584086 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:36:37.593181 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:36:37.594314 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:36:37.595395 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:36:37.595498 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:36:37.596821 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:36:37.598969 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:36:37.600901 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:36:37.613149 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:36:37.615315 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:36:37.616472 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:36:37.617659 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:36:37.621077 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:36:37.623228 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:36:37.623522 jq[1494]: false Jul 6 23:36:37.627150 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:36:37.631334 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:36:37.633104 extend-filesystems[1495]: Found /dev/vda6 Jul 6 23:36:37.634288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 6 23:36:37.636397 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:36:37.636890 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:36:37.638313 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:36:37.641340 extend-filesystems[1495]: Found /dev/vda9 Jul 6 23:36:37.641397 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:36:37.646641 extend-filesystems[1495]: Checking size of /dev/vda9 Jul 6 23:36:37.648229 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:36:37.650789 jq[1512]: true Jul 6 23:36:37.652899 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:36:37.653128 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:36:37.654864 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:36:37.655271 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:36:37.673639 extend-filesystems[1495]: Resized partition /dev/vda9 Jul 6 23:36:37.674165 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:36:37.686228 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:36:37.688109 (ntainerd)[1520]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:36:37.709234 jq[1519]: true Jul 6 23:36:37.716529 tar[1518]: linux-arm64/helm Jul 6 23:36:37.726868 extend-filesystems[1534]: resize2fs 1.47.2 (1-Jan-2025) Jul 6 23:36:37.727054 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (Power Button) Jul 6 23:36:37.727373 systemd-logind[1503]: New seat seat0. 
Jul 6 23:36:37.727943 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:36:37.742951 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 6 23:36:37.757855 update_engine[1508]: I20250706 23:36:37.757687 1508 main.cc:92] Flatcar Update Engine starting Jul 6 23:36:37.773355 dbus-daemon[1492]: [system] SELinux support is enabled Jul 6 23:36:37.773868 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:36:37.777846 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:36:37.777891 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:36:37.779583 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:36:37.779613 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:36:37.783811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:36:37.788997 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 6 23:36:37.794942 dbus-daemon[1492]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:36:37.797090 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:36:37.812043 update_engine[1508]: I20250706 23:36:37.797133 1508 update_check_scheduler.cc:74] Next update check in 6m19s Jul 6 23:36:37.802492 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 6 23:36:37.815285 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:36:37.815285 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:36:37.815285 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 6 23:36:37.821189 extend-filesystems[1495]: Resized filesystem in /dev/vda9 Jul 6 23:36:37.816124 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:36:37.817972 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:36:37.846951 bash[1559]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:36:37.848352 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:36:37.851222 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:36:37.878361 locksmithd[1558]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:36:38.002976 containerd[1520]: time="2025-07-06T23:36:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 6 23:36:38.003510 containerd[1520]: time="2025-07-06T23:36:38.003468917Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 6 23:36:38.020916 containerd[1520]: time="2025-07-06T23:36:38.020862324Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.001µs" Jul 6 23:36:38.020916 containerd[1520]: time="2025-07-06T23:36:38.020906345Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 6 23:36:38.021050 containerd[1520]: time="2025-07-06T23:36:38.020938111Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt 
type=io.containerd.internal.v1 Jul 6 23:36:38.021146 containerd[1520]: time="2025-07-06T23:36:38.021121735Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 6 23:36:38.021250 containerd[1520]: time="2025-07-06T23:36:38.021146615Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 6 23:36:38.021250 containerd[1520]: time="2025-07-06T23:36:38.021175716Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:36:38.021250 containerd[1520]: time="2025-07-06T23:36:38.021230681Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:36:38.021250 containerd[1520]: time="2025-07-06T23:36:38.021243141Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:36:38.021515 containerd[1520]: time="2025-07-06T23:36:38.021488944Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:36:38.021515 containerd[1520]: time="2025-07-06T23:36:38.021511856Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:36:38.021575 containerd[1520]: time="2025-07-06T23:36:38.021525382Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:36:38.021575 containerd[1520]: time="2025-07-06T23:36:38.021533620Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 6 23:36:38.021657 containerd[1520]: 
time="2025-07-06T23:36:38.021606742Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 6 23:36:38.021830 containerd[1520]: time="2025-07-06T23:36:38.021809836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:36:38.021861 containerd[1520]: time="2025-07-06T23:36:38.021852218Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:36:38.021887 containerd[1520]: time="2025-07-06T23:36:38.021862588Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 6 23:36:38.021919 containerd[1520]: time="2025-07-06T23:36:38.021891115Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 6 23:36:38.022163 containerd[1520]: time="2025-07-06T23:36:38.022143681Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 6 23:36:38.022232 containerd[1520]: time="2025-07-06T23:36:38.022214795Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:36:38.025466 containerd[1520]: time="2025-07-06T23:36:38.025431303Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 6 23:36:38.025533 containerd[1520]: time="2025-07-06T23:36:38.025481144Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 6 23:36:38.025533 containerd[1520]: time="2025-07-06T23:36:38.025498563Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 6 23:36:38.025533 containerd[1520]: time="2025-07-06T23:36:38.025512417Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 
6 23:36:38.025533 containerd[1520]: time="2025-07-06T23:36:38.025525656Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 6 23:36:38.025637 containerd[1520]: time="2025-07-06T23:36:38.025546478Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 6 23:36:38.025637 containerd[1520]: time="2025-07-06T23:36:38.025558610Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 6 23:36:38.025637 containerd[1520]: time="2025-07-06T23:36:38.025570456Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 6 23:36:38.025637 containerd[1520]: time="2025-07-06T23:36:38.025593327Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 6 23:36:38.025637 containerd[1520]: time="2025-07-06T23:36:38.025606730Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 6 23:36:38.025637 containerd[1520]: time="2025-07-06T23:36:38.025618534Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 6 23:36:38.025637 containerd[1520]: time="2025-07-06T23:36:38.025637307Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 6 23:36:38.026016 containerd[1520]: time="2025-07-06T23:36:38.025748793Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 6 23:36:38.026016 containerd[1520]: time="2025-07-06T23:36:38.025776993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 6 23:36:38.026016 containerd[1520]: time="2025-07-06T23:36:38.025794576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 6 23:36:38.026016 
containerd[1520]: time="2025-07-06T23:36:38.025806094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 6 23:36:38.026016 containerd[1520]: time="2025-07-06T23:36:38.025816300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 6 23:36:38.026016 containerd[1520]: time="2025-07-06T23:36:38.025827080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 6 23:36:38.026016 containerd[1520]: time="2025-07-06T23:36:38.025837736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 6 23:36:38.026016 containerd[1520]: time="2025-07-06T23:36:38.025848352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 6 23:36:38.026016 containerd[1520]: time="2025-07-06T23:36:38.025860894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 6 23:36:38.026016 containerd[1520]: time="2025-07-06T23:36:38.025872453Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 6 23:36:38.026016 containerd[1520]: time="2025-07-06T23:36:38.025883438Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 6 23:36:38.026286 containerd[1520]: time="2025-07-06T23:36:38.026134569Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 6 23:36:38.026286 containerd[1520]: time="2025-07-06T23:36:38.026151907Z" level=info msg="Start snapshots syncer" Jul 6 23:36:38.026286 containerd[1520]: time="2025-07-06T23:36:38.026176745Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 6 23:36:38.026551 containerd[1520]: time="2025-07-06T23:36:38.026512025Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 6 23:36:38.026789 containerd[1520]: time="2025-07-06T23:36:38.026569694Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 6 23:36:38.026789 containerd[1520]: time="2025-07-06T23:36:38.026650071Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 6 23:36:38.026967 containerd[1520]: time="2025-07-06T23:36:38.026941043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 6 23:36:38.026997 containerd[1520]: time="2025-07-06T23:36:38.026974448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 6 23:36:38.026997 containerd[1520]: time="2025-07-06T23:36:38.026987564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 6 23:36:38.027045 containerd[1520]: time="2025-07-06T23:36:38.027000434Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 6 23:36:38.027045 containerd[1520]: time="2025-07-06T23:36:38.027011664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 6 23:36:38.027045 containerd[1520]: time="2025-07-06T23:36:38.027021747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 6 23:36:38.027045 containerd[1520]: time="2025-07-06T23:36:38.027031502Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 6 23:36:38.027113 containerd[1520]: time="2025-07-06T23:36:38.027074294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 6 23:36:38.027113 containerd[1520]: time="2025-07-06T23:36:38.027087451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 6 23:36:38.027113 containerd[1520]: time="2025-07-06T23:36:38.027098107Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 6 23:36:38.027167 containerd[1520]: time="2025-07-06T23:36:38.027151637Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:36:38.027186 containerd[1520]: time="2025-07-06T23:36:38.027165860Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:36:38.027186 containerd[1520]: time="2025-07-06T23:36:38.027174631Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:36:38.027220 containerd[1520]: time="2025-07-06T23:36:38.027184059Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:36:38.028363 containerd[1520]: time="2025-07-06T23:36:38.027191764Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 6 23:36:38.028418 containerd[1520]: time="2025-07-06T23:36:38.028392990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 6 23:36:38.028451 containerd[1520]: time="2025-07-06T23:36:38.028420370Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 6 23:36:38.028620 containerd[1520]: time="2025-07-06T23:36:38.028502345Z" level=info msg="runtime interface created" Jul 6 23:36:38.028620 containerd[1520]: time="2025-07-06T23:36:38.028515215Z" level=info msg="created NRI interface" Jul 6 23:36:38.028620 containerd[1520]: time="2025-07-06T23:36:38.028557515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 6 23:36:38.028843 containerd[1520]: time="2025-07-06T23:36:38.028603954Z" level=info msg="Connect containerd service" Jul 6 23:36:38.028880 containerd[1520]: time="2025-07-06T23:36:38.028854142Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:36:38.030014 containerd[1520]: 
time="2025-07-06T23:36:38.029978393Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:36:38.144978 tar[1518]: linux-arm64/LICENSE Jul 6 23:36:38.145069 tar[1518]: linux-arm64/README.md Jul 6 23:36:38.150816 containerd[1520]: time="2025-07-06T23:36:38.150743458Z" level=info msg="Start subscribing containerd event" Jul 6 23:36:38.150816 containerd[1520]: time="2025-07-06T23:36:38.150818384Z" level=info msg="Start recovering state" Jul 6 23:36:38.150923 containerd[1520]: time="2025-07-06T23:36:38.150897162Z" level=info msg="Start event monitor" Jul 6 23:36:38.150923 containerd[1520]: time="2025-07-06T23:36:38.150914705Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:36:38.150923 containerd[1520]: time="2025-07-06T23:36:38.150923804Z" level=info msg="Start streaming server" Jul 6 23:36:38.150923 containerd[1520]: time="2025-07-06T23:36:38.150972416Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 6 23:36:38.151093 containerd[1520]: time="2025-07-06T23:36:38.150980777Z" level=info msg="runtime interface starting up..." Jul 6 23:36:38.151093 containerd[1520]: time="2025-07-06T23:36:38.150987212Z" level=info msg="starting plugins..." Jul 6 23:36:38.151093 containerd[1520]: time="2025-07-06T23:36:38.151002255Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 6 23:36:38.151267 containerd[1520]: time="2025-07-06T23:36:38.151235352Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:36:38.151404 containerd[1520]: time="2025-07-06T23:36:38.151380612Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:36:38.151725 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 6 23:36:38.153485 containerd[1520]: time="2025-07-06T23:36:38.151605675Z" level=info msg="containerd successfully booted in 0.149465s" Jul 6 23:36:38.161154 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:36:38.928630 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:36:38.947838 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:36:38.950761 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:36:38.972584 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:36:38.972827 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:36:38.976908 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:36:39.003990 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:36:39.007555 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:36:39.009996 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:36:39.011481 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:36:39.379233 systemd-networkd[1428]: eth0: Gained IPv6LL Jul 6 23:36:39.381896 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:36:39.384443 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:36:39.387359 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 6 23:36:39.390015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:36:39.411211 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:36:39.428464 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 6 23:36:39.428845 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 6 23:36:39.431010 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 6 23:36:39.443132 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:36:39.989185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:36:39.990899 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:36:39.992480 systemd[1]: Startup finished in 2.171s (kernel) + 5.060s (initrd) + 4.356s (userspace) = 11.588s. Jul 6 23:36:39.993323 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:36:40.676539 kubelet[1632]: E0706 23:36:40.676481 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:36:40.678842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:36:40.678994 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:36:40.681007 systemd[1]: kubelet.service: Consumed 992ms CPU time, 262.6M memory peak. Jul 6 23:36:43.837659 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:36:43.838889 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:57656.service - OpenSSH per-connection server daemon (10.0.0.1:57656). Jul 6 23:36:43.918302 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 57656 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:36:43.920277 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:36:43.933137 systemd-logind[1503]: New session 1 of user core. Jul 6 23:36:43.933344 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:36:43.934406 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 6 23:36:43.957457 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:36:43.959965 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:36:43.993241 (systemd)[1649]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:36:43.995692 systemd-logind[1503]: New session c1 of user core. Jul 6 23:36:44.115533 systemd[1649]: Queued start job for default target default.target. Jul 6 23:36:44.130978 systemd[1649]: Created slice app.slice - User Application Slice. Jul 6 23:36:44.131005 systemd[1649]: Reached target paths.target - Paths. Jul 6 23:36:44.131043 systemd[1649]: Reached target timers.target - Timers. Jul 6 23:36:44.132333 systemd[1649]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:36:44.141416 systemd[1649]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:36:44.141487 systemd[1649]: Reached target sockets.target - Sockets. Jul 6 23:36:44.141526 systemd[1649]: Reached target basic.target - Basic System. Jul 6 23:36:44.141554 systemd[1649]: Reached target default.target - Main User Target. Jul 6 23:36:44.141580 systemd[1649]: Startup finished in 139ms. Jul 6 23:36:44.141773 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:36:44.143194 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:36:44.203328 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:57670.service - OpenSSH per-connection server daemon (10.0.0.1:57670). Jul 6 23:36:44.255843 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 57670 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:36:44.257360 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:36:44.261970 systemd-logind[1503]: New session 2 of user core. Jul 6 23:36:44.273113 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 6 23:36:44.324971 sshd[1662]: Connection closed by 10.0.0.1 port 57670
Jul 6 23:36:44.325441 sshd-session[1660]: pam_unix(sshd:session): session closed for user core
Jul 6 23:36:44.343130 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:57670.service: Deactivated successfully.
Jul 6 23:36:44.346245 systemd[1]: session-2.scope: Deactivated successfully.
Jul 6 23:36:44.346992 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit.
Jul 6 23:36:44.349611 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:57682.service - OpenSSH per-connection server daemon (10.0.0.1:57682).
Jul 6 23:36:44.350297 systemd-logind[1503]: Removed session 2.
Jul 6 23:36:44.406759 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 57682 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:36:44.407293 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:36:44.411895 systemd-logind[1503]: New session 3 of user core.
Jul 6 23:36:44.429100 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:36:44.477981 sshd[1670]: Connection closed by 10.0.0.1 port 57682
Jul 6 23:36:44.477952 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Jul 6 23:36:44.495364 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:57682.service: Deactivated successfully.
Jul 6 23:36:44.499116 systemd[1]: session-3.scope: Deactivated successfully.
Jul 6 23:36:44.499795 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit.
Jul 6 23:36:44.502249 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:57684.service - OpenSSH per-connection server daemon (10.0.0.1:57684).
Jul 6 23:36:44.502693 systemd-logind[1503]: Removed session 3.
Jul 6 23:36:44.554545 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 57684 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:36:44.555853 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:36:44.560808 systemd-logind[1503]: New session 4 of user core.
Jul 6 23:36:44.578099 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:36:44.631567 sshd[1678]: Connection closed by 10.0.0.1 port 57684
Jul 6 23:36:44.631419 sshd-session[1676]: pam_unix(sshd:session): session closed for user core
Jul 6 23:36:44.641079 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:57684.service: Deactivated successfully.
Jul 6 23:36:44.643534 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:36:44.644253 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:36:44.646936 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:57696.service - OpenSSH per-connection server daemon (10.0.0.1:57696).
Jul 6 23:36:44.647614 systemd-logind[1503]: Removed session 4.
Jul 6 23:36:44.706118 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 57696 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:36:44.707466 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:36:44.711989 systemd-logind[1503]: New session 5 of user core.
Jul 6 23:36:44.719091 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:36:44.790412 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:36:44.790700 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:36:44.805690 sudo[1687]: pam_unix(sudo:session): session closed for user root
Jul 6 23:36:44.807769 sshd[1686]: Connection closed by 10.0.0.1 port 57696
Jul 6 23:36:44.808975 sshd-session[1684]: pam_unix(sshd:session): session closed for user core
Jul 6 23:36:44.817657 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:57696.service: Deactivated successfully.
Jul 6 23:36:44.820440 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:36:44.823106 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:36:44.826830 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:57706.service - OpenSSH per-connection server daemon (10.0.0.1:57706).
Jul 6 23:36:44.827557 systemd-logind[1503]: Removed session 5.
Jul 6 23:36:44.883409 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 57706 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:36:44.884989 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:36:44.889119 systemd-logind[1503]: New session 6 of user core.
Jul 6 23:36:44.909117 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:36:44.962886 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:36:44.963199 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:36:45.032876 sudo[1697]: pam_unix(sudo:session): session closed for user root
Jul 6 23:36:45.038543 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 6 23:36:45.038806 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:36:45.049234 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:36:45.096873 augenrules[1719]: No rules
Jul 6 23:36:45.098341 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:36:45.098597 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:36:45.100105 sudo[1696]: pam_unix(sudo:session): session closed for user root
Jul 6 23:36:45.101819 sshd[1695]: Connection closed by 10.0.0.1 port 57706
Jul 6 23:36:45.101671 sshd-session[1693]: pam_unix(sshd:session): session closed for user core
Jul 6 23:36:45.112402 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:57706.service: Deactivated successfully.
Jul 6 23:36:45.115487 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:36:45.116264 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:36:45.119008 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:57708.service - OpenSSH per-connection server daemon (10.0.0.1:57708).
Jul 6 23:36:45.119473 systemd-logind[1503]: Removed session 6.
Jul 6 23:36:45.170032 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 57708 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:36:45.171326 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:36:45.175254 systemd-logind[1503]: New session 7 of user core.
Jul 6 23:36:45.186094 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:36:45.238329 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:36:45.239251 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:36:45.672424 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:36:45.690272 (dockerd)[1751]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:36:46.004367 dockerd[1751]: time="2025-07-06T23:36:46.004312858Z" level=info msg="Starting up"
Jul 6 23:36:46.005139 dockerd[1751]: time="2025-07-06T23:36:46.005115249Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 6 23:36:46.047762 dockerd[1751]: time="2025-07-06T23:36:46.047632062Z" level=info msg="Loading containers: start."
Jul 6 23:36:46.055939 kernel: Initializing XFRM netlink socket
Jul 6 23:36:46.251793 systemd-networkd[1428]: docker0: Link UP
Jul 6 23:36:46.255221 dockerd[1751]: time="2025-07-06T23:36:46.255130381Z" level=info msg="Loading containers: done."
Jul 6 23:36:46.268864 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck5304512-merged.mount: Deactivated successfully.
Jul 6 23:36:46.271619 dockerd[1751]: time="2025-07-06T23:36:46.271564239Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:36:46.271699 dockerd[1751]: time="2025-07-06T23:36:46.271643022Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 6 23:36:46.271778 dockerd[1751]: time="2025-07-06T23:36:46.271746977Z" level=info msg="Initializing buildkit"
Jul 6 23:36:46.291743 dockerd[1751]: time="2025-07-06T23:36:46.291698307Z" level=info msg="Completed buildkit initialization"
Jul 6 23:36:46.296460 dockerd[1751]: time="2025-07-06T23:36:46.296422330Z" level=info msg="Daemon has completed initialization"
Jul 6 23:36:46.296510 dockerd[1751]: time="2025-07-06T23:36:46.296463557Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:36:46.296619 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:36:46.845983 containerd[1520]: time="2025-07-06T23:36:46.845943521Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 6 23:36:47.347997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount550262080.mount: Deactivated successfully.
Jul 6 23:36:48.276363 containerd[1520]: time="2025-07-06T23:36:48.276293469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:48.276820 containerd[1520]: time="2025-07-06T23:36:48.276787699Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795"
Jul 6 23:36:48.277922 containerd[1520]: time="2025-07-06T23:36:48.277869199Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:48.280718 containerd[1520]: time="2025-07-06T23:36:48.280667257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:48.281269 containerd[1520]: time="2025-07-06T23:36:48.281239712Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.435253649s"
Jul 6 23:36:48.281325 containerd[1520]: time="2025-07-06T23:36:48.281272967Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 6 23:36:48.286036 containerd[1520]: time="2025-07-06T23:36:48.286003980Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 6 23:36:49.334135 containerd[1520]: time="2025-07-06T23:36:49.334091649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:49.334836 containerd[1520]: time="2025-07-06T23:36:49.334796151Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679"
Jul 6 23:36:49.335545 containerd[1520]: time="2025-07-06T23:36:49.335492285Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:49.337808 containerd[1520]: time="2025-07-06T23:36:49.337779492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:49.339245 containerd[1520]: time="2025-07-06T23:36:49.339186282Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.05313919s"
Jul 6 23:36:49.339245 containerd[1520]: time="2025-07-06T23:36:49.339225383Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 6 23:36:49.340209 containerd[1520]: time="2025-07-06T23:36:49.340158940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 6 23:36:50.335995 containerd[1520]: time="2025-07-06T23:36:50.335945520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:50.337020 containerd[1520]: time="2025-07-06T23:36:50.336985627Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068"
Jul 6 23:36:50.337840 containerd[1520]: time="2025-07-06T23:36:50.337804399Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:50.340863 containerd[1520]: time="2025-07-06T23:36:50.340831057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:50.342006 containerd[1520]: time="2025-07-06T23:36:50.341977812Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.001772542s"
Jul 6 23:36:50.342142 containerd[1520]: time="2025-07-06T23:36:50.342093947Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 6 23:36:50.342627 containerd[1520]: time="2025-07-06T23:36:50.342603428Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 6 23:36:50.745652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:36:50.747116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:36:50.886434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:36:50.890531 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:36:50.937307 kubelet[2027]: E0706 23:36:50.937249 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:36:50.940077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:36:50.940311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:36:50.942114 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.4M memory peak.
Jul 6 23:36:51.365750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961592508.mount: Deactivated successfully.
Jul 6 23:36:51.725953 containerd[1520]: time="2025-07-06T23:36:51.725809509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:51.728071 containerd[1520]: time="2025-07-06T23:36:51.728029849Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959"
Jul 6 23:36:51.730841 containerd[1520]: time="2025-07-06T23:36:51.730809573Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:51.734719 containerd[1520]: time="2025-07-06T23:36:51.734681228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:51.735771 containerd[1520]: time="2025-07-06T23:36:51.735730013Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.393099375s"
Jul 6 23:36:51.735771 containerd[1520]: time="2025-07-06T23:36:51.735768700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 6 23:36:51.736410 containerd[1520]: time="2025-07-06T23:36:51.736202138Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 6 23:36:52.243105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340926283.mount: Deactivated successfully.
Jul 6 23:36:52.870629 containerd[1520]: time="2025-07-06T23:36:52.870572124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:52.874465 containerd[1520]: time="2025-07-06T23:36:52.874431925Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 6 23:36:52.875512 containerd[1520]: time="2025-07-06T23:36:52.875481186Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:52.880232 containerd[1520]: time="2025-07-06T23:36:52.880190730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:52.881558 containerd[1520]: time="2025-07-06T23:36:52.881520454Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.14529072s"
Jul 6 23:36:52.881558 containerd[1520]: time="2025-07-06T23:36:52.881555307Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 6 23:36:52.882041 containerd[1520]: time="2025-07-06T23:36:52.882019226Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:36:53.377252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3182116091.mount: Deactivated successfully.
Jul 6 23:36:53.382224 containerd[1520]: time="2025-07-06T23:36:53.382167503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:36:53.383202 containerd[1520]: time="2025-07-06T23:36:53.383171396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 6 23:36:53.384110 containerd[1520]: time="2025-07-06T23:36:53.384058422Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:36:53.385740 containerd[1520]: time="2025-07-06T23:36:53.385711311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:36:53.386558 containerd[1520]: time="2025-07-06T23:36:53.386488692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 504.439756ms"
Jul 6 23:36:53.386558 containerd[1520]: time="2025-07-06T23:36:53.386518070Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 6 23:36:53.387354 containerd[1520]: time="2025-07-06T23:36:53.387116216Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 6 23:36:53.954122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2632125491.mount: Deactivated successfully.
Jul 6 23:36:55.307524 containerd[1520]: time="2025-07-06T23:36:55.307467719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:55.308228 containerd[1520]: time="2025-07-06T23:36:55.308190118Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
Jul 6 23:36:55.309185 containerd[1520]: time="2025-07-06T23:36:55.309122129Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:55.311696 containerd[1520]: time="2025-07-06T23:36:55.311663756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:36:55.313610 containerd[1520]: time="2025-07-06T23:36:55.313574899Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.926421886s"
Jul 6 23:36:55.313610 containerd[1520]: time="2025-07-06T23:36:55.313607862Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 6 23:37:00.777702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:37:00.778074 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.4M memory peak.
Jul 6 23:37:00.780414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:37:00.802367 systemd[1]: Reload requested from client PID 2185 ('systemctl') (unit session-7.scope)...
Jul 6 23:37:00.802384 systemd[1]: Reloading...
Jul 6 23:37:00.869948 zram_generator::config[2230]: No configuration found.
Jul 6 23:37:00.947445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:37:01.036806 systemd[1]: Reloading finished in 234 ms.
Jul 6 23:37:01.097485 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:37:01.097576 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:37:01.097821 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:37:01.097880 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95M memory peak.
Jul 6 23:37:01.099606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:37:01.209215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:37:01.213285 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:37:01.253304 kubelet[2272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:37:01.253304 kubelet[2272]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:37:01.253304 kubelet[2272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:37:01.253641 kubelet[2272]: I0706 23:37:01.253342 2272 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:37:01.645068 kubelet[2272]: I0706 23:37:01.645020 2272 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 6 23:37:01.645068 kubelet[2272]: I0706 23:37:01.645053 2272 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:37:01.645324 kubelet[2272]: I0706 23:37:01.645295 2272 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 6 23:37:01.687764 kubelet[2272]: E0706 23:37:01.687715 2272 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:37:01.690138 kubelet[2272]: I0706 23:37:01.690018 2272 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:37:01.698894 kubelet[2272]: I0706 23:37:01.698864 2272 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:37:01.702518 kubelet[2272]: I0706 23:37:01.702484 2272 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:37:01.703332 kubelet[2272]: I0706 23:37:01.703302 2272 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 6 23:37:01.703510 kubelet[2272]: I0706 23:37:01.703473 2272 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:37:01.703698 kubelet[2272]: I0706 23:37:01.703502 2272 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:37:01.704089 kubelet[2272]: I0706 23:37:01.704061 2272 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:37:01.704089 kubelet[2272]: I0706 23:37:01.704077 2272 container_manager_linux.go:300] "Creating device plugin manager"
Jul 6 23:37:01.704610 kubelet[2272]: I0706 23:37:01.704586 2272 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:37:01.707215 kubelet[2272]: I0706 23:37:01.706946 2272 kubelet.go:408] "Attempting to sync node with API server"
Jul 6 23:37:01.707215 kubelet[2272]: I0706 23:37:01.706977 2272 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:37:01.707215 kubelet[2272]: I0706 23:37:01.706999 2272 kubelet.go:314] "Adding apiserver pod source"
Jul 6 23:37:01.707215 kubelet[2272]: I0706 23:37:01.707014 2272 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:37:01.718211 kubelet[2272]: W0706 23:37:01.718104 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Jul 6 23:37:01.718211 kubelet[2272]: E0706 23:37:01.718156 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:37:01.719140 kubelet[2272]: I0706 23:37:01.719078 2272 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:37:01.719377 kubelet[2272]: W0706 23:37:01.719335 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Jul 6 23:37:01.719442 kubelet[2272]: E0706 23:37:01.719387 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:37:01.719894 kubelet[2272]: I0706 23:37:01.719879 2272 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:37:01.720146 kubelet[2272]: W0706 23:37:01.720131 2272 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:37:01.721287 kubelet[2272]: I0706 23:37:01.721265 2272 server.go:1274] "Started kubelet" Jul 6 23:37:01.722161 kubelet[2272]: I0706 23:37:01.721630 2272 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:37:01.722161 kubelet[2272]: I0706 23:37:01.722062 2272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:37:01.722671 kubelet[2272]: I0706 23:37:01.722464 2272 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:37:01.723540 kubelet[2272]: I0706 23:37:01.723499 2272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:37:01.723639 kubelet[2272]: I0706 23:37:01.723624 2272 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:37:01.725189 kubelet[2272]: I0706 23:37:01.725166 2272 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:37:01.726470 kubelet[2272]: E0706 23:37:01.725301 2272 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcdc3fd3b0836 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:37:01.721233462 +0000 UTC m=+0.504830844,LastTimestamp:2025-07-06 23:37:01.721233462 +0000 UTC m=+0.504830844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:37:01.726581 kubelet[2272]: I0706 23:37:01.726489 2272 volume_manager.go:289] 
"Starting Kubelet Volume Manager" Jul 6 23:37:01.726604 kubelet[2272]: I0706 23:37:01.726591 2272 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:37:01.726688 kubelet[2272]: I0706 23:37:01.726668 2272 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:37:01.726862 kubelet[2272]: E0706 23:37:01.726674 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:37:01.727148 kubelet[2272]: E0706 23:37:01.727115 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms" Jul 6 23:37:01.727547 kubelet[2272]: W0706 23:37:01.727141 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 6 23:37:01.727680 kubelet[2272]: E0706 23:37:01.727662 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:37:01.728281 kubelet[2272]: I0706 23:37:01.728255 2272 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:37:01.734078 kubelet[2272]: E0706 23:37:01.733892 2272 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:37:01.735709 kubelet[2272]: I0706 23:37:01.735682 2272 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:37:01.735709 kubelet[2272]: I0706 23:37:01.735707 2272 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:37:01.747658 kubelet[2272]: I0706 23:37:01.747623 2272 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:37:01.747658 kubelet[2272]: I0706 23:37:01.747647 2272 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:37:01.747658 kubelet[2272]: I0706 23:37:01.747665 2272 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:37:01.758877 kubelet[2272]: I0706 23:37:01.758680 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:37:01.760500 kubelet[2272]: I0706 23:37:01.760472 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:37:01.760925 kubelet[2272]: I0706 23:37:01.760650 2272 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:37:01.760925 kubelet[2272]: I0706 23:37:01.760816 2272 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:37:01.760925 kubelet[2272]: E0706 23:37:01.760859 2272 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:37:01.761760 kubelet[2272]: W0706 23:37:01.761725 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 6 23:37:01.761933 kubelet[2272]: E0706 23:37:01.761908 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:37:01.827070 kubelet[2272]: E0706 23:37:01.827029 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:37:01.845331 kubelet[2272]: I0706 23:37:01.845309 2272 policy_none.go:49] "None policy: Start" Jul 6 23:37:01.846454 kubelet[2272]: I0706 23:37:01.846406 2272 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:37:01.846454 kubelet[2272]: I0706 23:37:01.846460 2272 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:37:01.852821 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:37:01.861379 kubelet[2272]: E0706 23:37:01.861329 2272 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:37:01.865175 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:37:01.868258 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 6 23:37:01.881746 kubelet[2272]: I0706 23:37:01.881706 2272 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:37:01.882117 kubelet[2272]: I0706 23:37:01.881948 2272 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:37:01.882117 kubelet[2272]: I0706 23:37:01.881968 2272 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:37:01.882734 kubelet[2272]: I0706 23:37:01.882615 2272 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:37:01.883715 kubelet[2272]: E0706 23:37:01.883678 2272 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:37:01.928917 kubelet[2272]: E0706 23:37:01.928756 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" Jul 6 23:37:01.984220 kubelet[2272]: I0706 23:37:01.984165 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:37:01.985030 kubelet[2272]: E0706 23:37:01.984986 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jul 6 23:37:02.071685 systemd[1]: Created slice kubepods-burstable-podbeddc5da5f6d5961a6c5b4c9558f7539.slice - libcontainer container kubepods-burstable-podbeddc5da5f6d5961a6c5b4c9558f7539.slice. Jul 6 23:37:02.091225 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. 
Jul 6 23:37:02.108653 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 6 23:37:02.128936 kubelet[2272]: I0706 23:37:02.128693 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:37:02.128936 kubelet[2272]: I0706 23:37:02.128730 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:37:02.128936 kubelet[2272]: I0706 23:37:02.128755 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:37:02.128936 kubelet[2272]: I0706 23:37:02.128773 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/beddc5da5f6d5961a6c5b4c9558f7539-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"beddc5da5f6d5961a6c5b4c9558f7539\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:37:02.128936 kubelet[2272]: I0706 23:37:02.128788 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/beddc5da5f6d5961a6c5b4c9558f7539-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"beddc5da5f6d5961a6c5b4c9558f7539\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:37:02.129163 kubelet[2272]: I0706 23:37:02.128802 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:37:02.129163 kubelet[2272]: I0706 23:37:02.128819 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:37:02.129163 kubelet[2272]: I0706 23:37:02.128832 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/beddc5da5f6d5961a6c5b4c9558f7539-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"beddc5da5f6d5961a6c5b4c9558f7539\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:37:02.129163 kubelet[2272]: I0706 23:37:02.128848 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:37:02.186381 kubelet[2272]: I0706 23:37:02.186277 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:37:02.187187 kubelet[2272]: E0706 23:37:02.187159 2272 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jul 6 23:37:02.329676 kubelet[2272]: E0706 23:37:02.329631 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms" Jul 6 23:37:02.389609 containerd[1520]: time="2025-07-06T23:37:02.389570337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:beddc5da5f6d5961a6c5b4c9558f7539,Namespace:kube-system,Attempt:0,}" Jul 6 23:37:02.406609 containerd[1520]: time="2025-07-06T23:37:02.406563142Z" level=info msg="connecting to shim f656f395bec784a4e84f3e1a75b3fbb36937e4a0d65ed4ca2e0b18108f4217d8" address="unix:///run/containerd/s/d439b6dd42af048467d4d131baed1ae2c6637ec812e88ec0f62444c511e34219" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:37:02.407531 containerd[1520]: time="2025-07-06T23:37:02.407474254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 6 23:37:02.412220 containerd[1520]: time="2025-07-06T23:37:02.412182646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 6 23:37:02.431230 systemd[1]: Started cri-containerd-f656f395bec784a4e84f3e1a75b3fbb36937e4a0d65ed4ca2e0b18108f4217d8.scope - libcontainer container f656f395bec784a4e84f3e1a75b3fbb36937e4a0d65ed4ca2e0b18108f4217d8. 
Jul 6 23:37:02.432853 containerd[1520]: time="2025-07-06T23:37:02.432797155Z" level=info msg="connecting to shim 44d7456e0fce807a918a2d1ef51cc6c9af5297b03ec85e4452aea6c821a74be8" address="unix:///run/containerd/s/1840b6dca30e24304c269e660a013d17b8061714b96058ecaab875c315d07018" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:37:02.449300 containerd[1520]: time="2025-07-06T23:37:02.449197247Z" level=info msg="connecting to shim 588e058f63ff0176281a8df2666ca4ccd2f8df644de7532ace5ee08d1ce03bb4" address="unix:///run/containerd/s/803ea360a915ca5255d4c7c9190091c527cda3ae7c8d9249518cc91c0bee7bfa" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:37:02.459267 systemd[1]: Started cri-containerd-44d7456e0fce807a918a2d1ef51cc6c9af5297b03ec85e4452aea6c821a74be8.scope - libcontainer container 44d7456e0fce807a918a2d1ef51cc6c9af5297b03ec85e4452aea6c821a74be8. Jul 6 23:37:02.483259 systemd[1]: Started cri-containerd-588e058f63ff0176281a8df2666ca4ccd2f8df644de7532ace5ee08d1ce03bb4.scope - libcontainer container 588e058f63ff0176281a8df2666ca4ccd2f8df644de7532ace5ee08d1ce03bb4. 
Jul 6 23:37:02.485916 containerd[1520]: time="2025-07-06T23:37:02.485870467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:beddc5da5f6d5961a6c5b4c9558f7539,Namespace:kube-system,Attempt:0,} returns sandbox id \"f656f395bec784a4e84f3e1a75b3fbb36937e4a0d65ed4ca2e0b18108f4217d8\"" Jul 6 23:37:02.493352 containerd[1520]: time="2025-07-06T23:37:02.493305507Z" level=info msg="CreateContainer within sandbox \"f656f395bec784a4e84f3e1a75b3fbb36937e4a0d65ed4ca2e0b18108f4217d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:37:02.502330 containerd[1520]: time="2025-07-06T23:37:02.502025433Z" level=info msg="Container b9268ddd5c06e7e5de92fc02cadd6d9903433229e60027ba33c4f996991fc9c6: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:02.513696 containerd[1520]: time="2025-07-06T23:37:02.513588365Z" level=info msg="CreateContainer within sandbox \"f656f395bec784a4e84f3e1a75b3fbb36937e4a0d65ed4ca2e0b18108f4217d8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b9268ddd5c06e7e5de92fc02cadd6d9903433229e60027ba33c4f996991fc9c6\"" Jul 6 23:37:02.514713 containerd[1520]: time="2025-07-06T23:37:02.514681779Z" level=info msg="StartContainer for \"b9268ddd5c06e7e5de92fc02cadd6d9903433229e60027ba33c4f996991fc9c6\"" Jul 6 23:37:02.517035 containerd[1520]: time="2025-07-06T23:37:02.517003542Z" level=info msg="connecting to shim b9268ddd5c06e7e5de92fc02cadd6d9903433229e60027ba33c4f996991fc9c6" address="unix:///run/containerd/s/d439b6dd42af048467d4d131baed1ae2c6637ec812e88ec0f62444c511e34219" protocol=ttrpc version=3 Jul 6 23:37:02.518069 containerd[1520]: time="2025-07-06T23:37:02.518006746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"44d7456e0fce807a918a2d1ef51cc6c9af5297b03ec85e4452aea6c821a74be8\"" Jul 6 23:37:02.521351 containerd[1520]: 
time="2025-07-06T23:37:02.521309812Z" level=info msg="CreateContainer within sandbox \"44d7456e0fce807a918a2d1ef51cc6c9af5297b03ec85e4452aea6c821a74be8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:37:02.529874 containerd[1520]: time="2025-07-06T23:37:02.529753502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"588e058f63ff0176281a8df2666ca4ccd2f8df644de7532ace5ee08d1ce03bb4\"" Jul 6 23:37:02.531291 containerd[1520]: time="2025-07-06T23:37:02.531256085Z" level=info msg="Container b33cbf91ff0e6ddd3d0cb77b5b435211afc24f2911d363c5a7c2cfe1b8d39219: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:02.532693 containerd[1520]: time="2025-07-06T23:37:02.532662653Z" level=info msg="CreateContainer within sandbox \"588e058f63ff0176281a8df2666ca4ccd2f8df644de7532ace5ee08d1ce03bb4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:37:02.542147 systemd[1]: Started cri-containerd-b9268ddd5c06e7e5de92fc02cadd6d9903433229e60027ba33c4f996991fc9c6.scope - libcontainer container b9268ddd5c06e7e5de92fc02cadd6d9903433229e60027ba33c4f996991fc9c6. 
Jul 6 23:37:02.545525 containerd[1520]: time="2025-07-06T23:37:02.545385705Z" level=info msg="CreateContainer within sandbox \"44d7456e0fce807a918a2d1ef51cc6c9af5297b03ec85e4452aea6c821a74be8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b33cbf91ff0e6ddd3d0cb77b5b435211afc24f2911d363c5a7c2cfe1b8d39219\"" Jul 6 23:37:02.546515 containerd[1520]: time="2025-07-06T23:37:02.546490771Z" level=info msg="StartContainer for \"b33cbf91ff0e6ddd3d0cb77b5b435211afc24f2911d363c5a7c2cfe1b8d39219\"" Jul 6 23:37:02.547830 containerd[1520]: time="2025-07-06T23:37:02.547741022Z" level=info msg="connecting to shim b33cbf91ff0e6ddd3d0cb77b5b435211afc24f2911d363c5a7c2cfe1b8d39219" address="unix:///run/containerd/s/1840b6dca30e24304c269e660a013d17b8061714b96058ecaab875c315d07018" protocol=ttrpc version=3 Jul 6 23:37:02.549402 containerd[1520]: time="2025-07-06T23:37:02.549374977Z" level=info msg="Container 52b893867880317ee8f12a100573ef3869f3b1c959f9d493896b143c59d6c5e4: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:02.551074 kubelet[2272]: W0706 23:37:02.550996 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 6 23:37:02.551138 kubelet[2272]: E0706 23:37:02.551085 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:37:02.556435 containerd[1520]: time="2025-07-06T23:37:02.556320648Z" level=info msg="CreateContainer within sandbox \"588e058f63ff0176281a8df2666ca4ccd2f8df644de7532ace5ee08d1ce03bb4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container 
id \"52b893867880317ee8f12a100573ef3869f3b1c959f9d493896b143c59d6c5e4\"" Jul 6 23:37:02.557035 containerd[1520]: time="2025-07-06T23:37:02.557007976Z" level=info msg="StartContainer for \"52b893867880317ee8f12a100573ef3869f3b1c959f9d493896b143c59d6c5e4\"" Jul 6 23:37:02.558160 containerd[1520]: time="2025-07-06T23:37:02.558125094Z" level=info msg="connecting to shim 52b893867880317ee8f12a100573ef3869f3b1c959f9d493896b143c59d6c5e4" address="unix:///run/containerd/s/803ea360a915ca5255d4c7c9190091c527cda3ae7c8d9249518cc91c0bee7bfa" protocol=ttrpc version=3 Jul 6 23:37:02.574148 systemd[1]: Started cri-containerd-b33cbf91ff0e6ddd3d0cb77b5b435211afc24f2911d363c5a7c2cfe1b8d39219.scope - libcontainer container b33cbf91ff0e6ddd3d0cb77b5b435211afc24f2911d363c5a7c2cfe1b8d39219. Jul 6 23:37:02.578257 systemd[1]: Started cri-containerd-52b893867880317ee8f12a100573ef3869f3b1c959f9d493896b143c59d6c5e4.scope - libcontainer container 52b893867880317ee8f12a100573ef3869f3b1c959f9d493896b143c59d6c5e4. Jul 6 23:37:02.587365 containerd[1520]: time="2025-07-06T23:37:02.587327677Z" level=info msg="StartContainer for \"b9268ddd5c06e7e5de92fc02cadd6d9903433229e60027ba33c4f996991fc9c6\" returns successfully" Jul 6 23:37:02.589352 kubelet[2272]: I0706 23:37:02.589318 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:37:02.590260 kubelet[2272]: E0706 23:37:02.589666 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jul 6 23:37:02.644786 containerd[1520]: time="2025-07-06T23:37:02.638426773Z" level=info msg="StartContainer for \"52b893867880317ee8f12a100573ef3869f3b1c959f9d493896b143c59d6c5e4\" returns successfully" Jul 6 23:37:02.644786 containerd[1520]: time="2025-07-06T23:37:02.639256123Z" level=info msg="StartContainer for \"b33cbf91ff0e6ddd3d0cb77b5b435211afc24f2911d363c5a7c2cfe1b8d39219\" returns 
successfully" Jul 6 23:37:02.645841 kubelet[2272]: W0706 23:37:02.645403 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 6 23:37:02.645841 kubelet[2272]: E0706 23:37:02.645472 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:37:03.393723 kubelet[2272]: I0706 23:37:03.393673 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:37:05.690556 kubelet[2272]: E0706 23:37:05.690509 2272 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 6 23:37:05.719026 kubelet[2272]: I0706 23:37:05.718988 2272 apiserver.go:52] "Watching apiserver" Jul 6 23:37:05.725021 kubelet[2272]: E0706 23:37:05.724939 2272 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184fcdc3fd3b0836 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:37:01.721233462 +0000 UTC m=+0.504830844,LastTimestamp:2025-07-06 23:37:01.721233462 +0000 UTC m=+0.504830844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:37:05.727486 kubelet[2272]: I0706 
23:37:05.727448 2272 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:37:05.758139 kubelet[2272]: I0706 23:37:05.758096 2272 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:37:05.758139 kubelet[2272]: E0706 23:37:05.758148 2272 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 6 23:37:06.301321 kubelet[2272]: E0706 23:37:06.301279 2272 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:37:07.689994 systemd[1]: Reload requested from client PID 2549 ('systemctl') (unit session-7.scope)... Jul 6 23:37:07.690014 systemd[1]: Reloading... Jul 6 23:37:07.761937 zram_generator::config[2592]: No configuration found. Jul 6 23:37:07.837091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:37:07.940499 systemd[1]: Reloading finished in 250 ms. Jul 6 23:37:07.973473 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:37:07.988420 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:37:07.989957 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:37:07.990015 systemd[1]: kubelet.service: Consumed 979ms CPU time, 128.2M memory peak. Jul 6 23:37:07.992771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:37:08.158789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:37:08.173274 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:37:08.213638 kubelet[2634]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:37:08.213638 kubelet[2634]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:37:08.213638 kubelet[2634]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:37:08.213638 kubelet[2634]: I0706 23:37:08.213384 2634 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:37:08.219088 kubelet[2634]: I0706 23:37:08.219044 2634 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:37:08.219088 kubelet[2634]: I0706 23:37:08.219075 2634 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:37:08.219370 kubelet[2634]: I0706 23:37:08.219345 2634 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:37:08.221318 kubelet[2634]: I0706 23:37:08.221282 2634 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 6 23:37:08.223865 kubelet[2634]: I0706 23:37:08.223711 2634 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:37:08.227525 kubelet[2634]: I0706 23:37:08.227496 2634 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:37:08.230292 kubelet[2634]: I0706 23:37:08.230264 2634 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:37:08.230423 kubelet[2634]: I0706 23:37:08.230384 2634 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 6 23:37:08.230541 kubelet[2634]: I0706 23:37:08.230505 2634 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:37:08.230771 kubelet[2634]: I0706 23:37:08.230535 2634 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:37:08.230857 kubelet[2634]: I0706 23:37:08.230783 2634 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:37:08.230857 kubelet[2634]: I0706 23:37:08.230793 2634 container_manager_linux.go:300] "Creating device plugin manager"
Jul 6 23:37:08.230857 kubelet[2634]: I0706 23:37:08.230831 2634 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:37:08.230971 kubelet[2634]: I0706 23:37:08.230959 2634 kubelet.go:408] "Attempting to sync node with API server"
Jul 6 23:37:08.231000 kubelet[2634]: I0706 23:37:08.230976 2634 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:37:08.231000 kubelet[2634]: I0706 23:37:08.230996 2634 kubelet.go:314] "Adding apiserver pod source"
Jul 6 23:37:08.231055 kubelet[2634]: I0706 23:37:08.231010 2634 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:37:08.232853 kubelet[2634]: I0706 23:37:08.232077 2634 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:37:08.233435 kubelet[2634]: I0706 23:37:08.233402 2634 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:37:08.233990 kubelet[2634]: I0706 23:37:08.233972 2634 server.go:1274] "Started kubelet"
Jul 6 23:37:08.236888 kubelet[2634]: I0706 23:37:08.236861 2634 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:37:08.236983 kubelet[2634]: I0706 23:37:08.236930 2634 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:37:08.237407 kubelet[2634]: I0706 23:37:08.237158 2634 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:37:08.237407 kubelet[2634]: I0706 23:37:08.237365 2634 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:37:08.238173 kubelet[2634]: I0706 23:37:08.235898 2634 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:37:08.240917 kubelet[2634]: I0706 23:37:08.239618 2634 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 6 23:37:08.240917 kubelet[2634]: E0706 23:37:08.240169 2634 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:37:08.241019 kubelet[2634]: I0706 23:37:08.241005 2634 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 6 23:37:08.242065 kubelet[2634]: I0706 23:37:08.241401 2634 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:37:08.243593 kubelet[2634]: I0706 23:37:08.243453 2634 server.go:449] "Adding debug handlers to kubelet server"
Jul 6 23:37:08.245414 kubelet[2634]: I0706 23:37:08.245388 2634 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:37:08.245961 kubelet[2634]: I0706 23:37:08.245890 2634 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:37:08.249078 kubelet[2634]: E0706 23:37:08.249046 2634 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:37:08.258254 kubelet[2634]: I0706 23:37:08.258196 2634 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:37:08.263627 kubelet[2634]: I0706 23:37:08.263265 2634 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:37:08.267005 kubelet[2634]: I0706 23:37:08.266973 2634 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:37:08.267237 kubelet[2634]: I0706 23:37:08.267146 2634 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 6 23:37:08.267237 kubelet[2634]: I0706 23:37:08.267184 2634 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 6 23:37:08.267370 kubelet[2634]: E0706 23:37:08.267345 2634 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:37:08.302738 kubelet[2634]: I0706 23:37:08.302527 2634 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 6 23:37:08.302738 kubelet[2634]: I0706 23:37:08.302548 2634 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 6 23:37:08.302738 kubelet[2634]: I0706 23:37:08.302577 2634 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:37:08.302738 kubelet[2634]: I0706 23:37:08.302731 2634 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 6 23:37:08.302738 kubelet[2634]: I0706 23:37:08.302742 2634 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 6 23:37:08.302964 kubelet[2634]: I0706 23:37:08.302760 2634 policy_none.go:49] "None policy: Start"
Jul 6 23:37:08.304101 kubelet[2634]: I0706 23:37:08.304072 2634 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 6 23:37:08.304101 kubelet[2634]: I0706 23:37:08.304105 2634 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:37:08.304934 kubelet[2634]: I0706 23:37:08.304265 2634 state_mem.go:75] "Updated machine memory state"
Jul 6 23:37:08.310333 kubelet[2634]: I0706 23:37:08.310299 2634 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 6 23:37:08.310613 kubelet[2634]: I0706 23:37:08.310589 2634 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:37:08.310701 kubelet[2634]: I0706 23:37:08.310611 2634 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:37:08.311213 kubelet[2634]: I0706 23:37:08.311160 2634 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:37:08.374992 kubelet[2634]: E0706 23:37:08.374946 2634 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 6 23:37:08.412340 kubelet[2634]: I0706 23:37:08.412276 2634 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 6 23:37:08.419813 kubelet[2634]: I0706 23:37:08.419761 2634 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 6 23:37:08.419968 kubelet[2634]: I0706 23:37:08.419869 2634 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 6 23:37:08.441767 kubelet[2634]: I0706 23:37:08.441717 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/beddc5da5f6d5961a6c5b4c9558f7539-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"beddc5da5f6d5961a6c5b4c9558f7539\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:37:08.441767 kubelet[2634]: I0706 23:37:08.441757 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/beddc5da5f6d5961a6c5b4c9558f7539-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"beddc5da5f6d5961a6c5b4c9558f7539\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:37:08.441767 kubelet[2634]: I0706 23:37:08.441777 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:37:08.441987 kubelet[2634]: I0706 23:37:08.441804 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:37:08.441987 kubelet[2634]: I0706 23:37:08.441821 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/beddc5da5f6d5961a6c5b4c9558f7539-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"beddc5da5f6d5961a6c5b4c9558f7539\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:37:08.441987 kubelet[2634]: I0706 23:37:08.441837 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:37:08.441987 kubelet[2634]: I0706 23:37:08.441851 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:37:08.441987 kubelet[2634]: I0706 23:37:08.441869 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:37:08.442096 kubelet[2634]: I0706 23:37:08.441885 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 6 23:37:09.231615 kubelet[2634]: I0706 23:37:09.231572 2634 apiserver.go:52] "Watching apiserver"
Jul 6 23:37:09.242037 kubelet[2634]: I0706 23:37:09.241987 2634 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 6 23:37:09.290415 kubelet[2634]: E0706 23:37:09.290372 2634 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 6 23:37:09.310705 kubelet[2634]: I0706 23:37:09.310526 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.310509524 podStartE2EDuration="1.310509524s" podCreationTimestamp="2025-07-06 23:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:37:09.310179074 +0000 UTC m=+1.133577748" watchObservedRunningTime="2025-07-06 23:37:09.310509524 +0000 UTC m=+1.133908238"
Jul 6 23:37:09.328438 kubelet[2634]: I0706 23:37:09.328332 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.328313045 podStartE2EDuration="2.328313045s" podCreationTimestamp="2025-07-06 23:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:37:09.319400501 +0000 UTC m=+1.142799215" watchObservedRunningTime="2025-07-06 23:37:09.328313045 +0000 UTC m=+1.151711719"
Jul 6 23:37:09.349055 kubelet[2634]: I0706 23:37:09.348978 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.348960084 podStartE2EDuration="1.348960084s" podCreationTimestamp="2025-07-06 23:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:37:09.328520407 +0000 UTC m=+1.151919161" watchObservedRunningTime="2025-07-06 23:37:09.348960084 +0000 UTC m=+1.172358838"
Jul 6 23:37:12.630825 kubelet[2634]: I0706 23:37:12.630793 2634 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 6 23:37:12.631458 containerd[1520]: time="2025-07-06T23:37:12.631405151Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 6 23:37:12.631898 kubelet[2634]: I0706 23:37:12.631702 2634 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 6 23:37:13.621315 systemd[1]: Created slice kubepods-besteffort-pod7e024bcb_dd0d_4602_b74a_6d521189a7ee.slice - libcontainer container kubepods-besteffort-pod7e024bcb_dd0d_4602_b74a_6d521189a7ee.slice.
Jul 6 23:37:13.674476 kubelet[2634]: I0706 23:37:13.674420 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e024bcb-dd0d-4602-b74a-6d521189a7ee-lib-modules\") pod \"kube-proxy-rblxn\" (UID: \"7e024bcb-dd0d-4602-b74a-6d521189a7ee\") " pod="kube-system/kube-proxy-rblxn"
Jul 6 23:37:13.674861 kubelet[2634]: I0706 23:37:13.674489 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e024bcb-dd0d-4602-b74a-6d521189a7ee-kube-proxy\") pod \"kube-proxy-rblxn\" (UID: \"7e024bcb-dd0d-4602-b74a-6d521189a7ee\") " pod="kube-system/kube-proxy-rblxn"
Jul 6 23:37:13.674861 kubelet[2634]: I0706 23:37:13.674508 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e024bcb-dd0d-4602-b74a-6d521189a7ee-xtables-lock\") pod \"kube-proxy-rblxn\" (UID: \"7e024bcb-dd0d-4602-b74a-6d521189a7ee\") " pod="kube-system/kube-proxy-rblxn"
Jul 6 23:37:13.674861 kubelet[2634]: I0706 23:37:13.674528 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vqrp\" (UniqueName: \"kubernetes.io/projected/7e024bcb-dd0d-4602-b74a-6d521189a7ee-kube-api-access-9vqrp\") pod \"kube-proxy-rblxn\" (UID: \"7e024bcb-dd0d-4602-b74a-6d521189a7ee\") " pod="kube-system/kube-proxy-rblxn"
Jul 6 23:37:13.796671 systemd[1]: Created slice kubepods-besteffort-podf00822ce_1be9_4e12_a69b_80641eb60e0b.slice - libcontainer container kubepods-besteffort-podf00822ce_1be9_4e12_a69b_80641eb60e0b.slice.
Jul 6 23:37:13.878623 kubelet[2634]: I0706 23:37:13.875972 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f00822ce-1be9-4e12-a69b-80641eb60e0b-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-9w6ls\" (UID: \"f00822ce-1be9-4e12-a69b-80641eb60e0b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-9w6ls"
Jul 6 23:37:13.878623 kubelet[2634]: I0706 23:37:13.876020 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25ffc\" (UniqueName: \"kubernetes.io/projected/f00822ce-1be9-4e12-a69b-80641eb60e0b-kube-api-access-25ffc\") pod \"tigera-operator-5bf8dfcb4-9w6ls\" (UID: \"f00822ce-1be9-4e12-a69b-80641eb60e0b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-9w6ls"
Jul 6 23:37:13.955845 containerd[1520]: time="2025-07-06T23:37:13.955790926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rblxn,Uid:7e024bcb-dd0d-4602-b74a-6d521189a7ee,Namespace:kube-system,Attempt:0,}"
Jul 6 23:37:13.976447 containerd[1520]: time="2025-07-06T23:37:13.976400117Z" level=info msg="connecting to shim 94186d61e9f5413539b46a94ac14739e9342adb1500f3a10e15d9d596b687ac6" address="unix:///run/containerd/s/f1bd10d444b69c630b33998caae17b47e6e270e826d31e5431ac9ed50b632bf9" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:37:14.010114 systemd[1]: Started cri-containerd-94186d61e9f5413539b46a94ac14739e9342adb1500f3a10e15d9d596b687ac6.scope - libcontainer container 94186d61e9f5413539b46a94ac14739e9342adb1500f3a10e15d9d596b687ac6.
Jul 6 23:37:14.033843 containerd[1520]: time="2025-07-06T23:37:14.033804452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rblxn,Uid:7e024bcb-dd0d-4602-b74a-6d521189a7ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"94186d61e9f5413539b46a94ac14739e9342adb1500f3a10e15d9d596b687ac6\""
Jul 6 23:37:14.037410 containerd[1520]: time="2025-07-06T23:37:14.037029194Z" level=info msg="CreateContainer within sandbox \"94186d61e9f5413539b46a94ac14739e9342adb1500f3a10e15d9d596b687ac6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 6 23:37:14.046731 containerd[1520]: time="2025-07-06T23:37:14.046405807Z" level=info msg="Container 029a3d827298d4945c72dffd8f78c585cbf63b53cccfb3ecd054dc4f6ae50338: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:37:14.053832 containerd[1520]: time="2025-07-06T23:37:14.053776610Z" level=info msg="CreateContainer within sandbox \"94186d61e9f5413539b46a94ac14739e9342adb1500f3a10e15d9d596b687ac6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"029a3d827298d4945c72dffd8f78c585cbf63b53cccfb3ecd054dc4f6ae50338\""
Jul 6 23:37:14.054371 containerd[1520]: time="2025-07-06T23:37:14.054331899Z" level=info msg="StartContainer for \"029a3d827298d4945c72dffd8f78c585cbf63b53cccfb3ecd054dc4f6ae50338\""
Jul 6 23:37:14.056739 containerd[1520]: time="2025-07-06T23:37:14.056658847Z" level=info msg="connecting to shim 029a3d827298d4945c72dffd8f78c585cbf63b53cccfb3ecd054dc4f6ae50338" address="unix:///run/containerd/s/f1bd10d444b69c630b33998caae17b47e6e270e826d31e5431ac9ed50b632bf9" protocol=ttrpc version=3
Jul 6 23:37:14.073072 systemd[1]: Started cri-containerd-029a3d827298d4945c72dffd8f78c585cbf63b53cccfb3ecd054dc4f6ae50338.scope - libcontainer container 029a3d827298d4945c72dffd8f78c585cbf63b53cccfb3ecd054dc4f6ae50338.
Jul 6 23:37:14.099584 containerd[1520]: time="2025-07-06T23:37:14.099511326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-9w6ls,Uid:f00822ce-1be9-4e12-a69b-80641eb60e0b,Namespace:tigera-operator,Attempt:0,}"
Jul 6 23:37:14.110138 containerd[1520]: time="2025-07-06T23:37:14.109203595Z" level=info msg="StartContainer for \"029a3d827298d4945c72dffd8f78c585cbf63b53cccfb3ecd054dc4f6ae50338\" returns successfully"
Jul 6 23:37:14.120425 containerd[1520]: time="2025-07-06T23:37:14.119206199Z" level=info msg="connecting to shim 25e70622e6f20c71e02f174d608fc583c11fd97d11218490e5821ebe2f68c9a9" address="unix:///run/containerd/s/fee9e3257d6e72b3302e725a50e466a09081d19ea8dc8ad664240219296072a6" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:37:14.143122 systemd[1]: Started cri-containerd-25e70622e6f20c71e02f174d608fc583c11fd97d11218490e5821ebe2f68c9a9.scope - libcontainer container 25e70622e6f20c71e02f174d608fc583c11fd97d11218490e5821ebe2f68c9a9.
Jul 6 23:37:14.200058 containerd[1520]: time="2025-07-06T23:37:14.199155006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-9w6ls,Uid:f00822ce-1be9-4e12-a69b-80641eb60e0b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"25e70622e6f20c71e02f174d608fc583c11fd97d11218490e5821ebe2f68c9a9\""
Jul 6 23:37:14.201582 containerd[1520]: time="2025-07-06T23:37:14.201499200Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 6 23:37:14.315392 kubelet[2634]: I0706 23:37:14.315335 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rblxn" podStartSLOduration=1.3152878239999999 podStartE2EDuration="1.315287824s" podCreationTimestamp="2025-07-06 23:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:37:14.304190607 +0000 UTC m=+6.127589361" watchObservedRunningTime="2025-07-06 23:37:14.315287824 +0000 UTC m=+6.138686538"
Jul 6 23:37:15.253421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2335457681.mount: Deactivated successfully.
Jul 6 23:37:17.478066 containerd[1520]: time="2025-07-06T23:37:17.478008555Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:17.479662 containerd[1520]: time="2025-07-06T23:37:17.479588523Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 6 23:37:17.480249 containerd[1520]: time="2025-07-06T23:37:17.480210484Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:17.482429 containerd[1520]: time="2025-07-06T23:37:17.482378723Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:17.483526 containerd[1520]: time="2025-07-06T23:37:17.483487170Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 3.28195432s"
Jul 6 23:37:17.483580 containerd[1520]: time="2025-07-06T23:37:17.483557908Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 6 23:37:17.491107 containerd[1520]: time="2025-07-06T23:37:17.491074129Z" level=info msg="CreateContainer within sandbox \"25e70622e6f20c71e02f174d608fc583c11fd97d11218490e5821ebe2f68c9a9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 6 23:37:17.504936 containerd[1520]: time="2025-07-06T23:37:17.501455689Z" level=info msg="Container 24d26cbb2fa6a1a18bc9855011940f110f61ec21f1593bf4900d76e4b05b1400: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:37:17.507454 containerd[1520]: time="2025-07-06T23:37:17.507405345Z" level=info msg="CreateContainer within sandbox \"25e70622e6f20c71e02f174d608fc583c11fd97d11218490e5821ebe2f68c9a9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"24d26cbb2fa6a1a18bc9855011940f110f61ec21f1593bf4900d76e4b05b1400\""
Jul 6 23:37:17.509079 containerd[1520]: time="2025-07-06T23:37:17.507934642Z" level=info msg="StartContainer for \"24d26cbb2fa6a1a18bc9855011940f110f61ec21f1593bf4900d76e4b05b1400\""
Jul 6 23:37:17.509079 containerd[1520]: time="2025-07-06T23:37:17.508775699Z" level=info msg="connecting to shim 24d26cbb2fa6a1a18bc9855011940f110f61ec21f1593bf4900d76e4b05b1400" address="unix:///run/containerd/s/fee9e3257d6e72b3302e725a50e466a09081d19ea8dc8ad664240219296072a6" protocol=ttrpc version=3
Jul 6 23:37:17.533108 systemd[1]: Started cri-containerd-24d26cbb2fa6a1a18bc9855011940f110f61ec21f1593bf4900d76e4b05b1400.scope - libcontainer container 24d26cbb2fa6a1a18bc9855011940f110f61ec21f1593bf4900d76e4b05b1400.
Jul 6 23:37:17.567602 containerd[1520]: time="2025-07-06T23:37:17.567548394Z" level=info msg="StartContainer for \"24d26cbb2fa6a1a18bc9855011940f110f61ec21f1593bf4900d76e4b05b1400\" returns successfully"
Jul 6 23:37:20.929827 kubelet[2634]: I0706 23:37:20.929743 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-9w6ls" podStartSLOduration=4.641045 podStartE2EDuration="7.929722555s" podCreationTimestamp="2025-07-06 23:37:13 +0000 UTC" firstStartedPulling="2025-07-06 23:37:14.201101559 +0000 UTC m=+6.024500273" lastFinishedPulling="2025-07-06 23:37:17.489779114 +0000 UTC m=+9.313177828" observedRunningTime="2025-07-06 23:37:18.316177343 +0000 UTC m=+10.139576057" watchObservedRunningTime="2025-07-06 23:37:20.929722555 +0000 UTC m=+12.753121269"
Jul 6 23:37:23.258928 sudo[1731]: pam_unix(sudo:session): session closed for user root
Jul 6 23:37:23.278363 sshd[1730]: Connection closed by 10.0.0.1 port 57708
Jul 6 23:37:23.282352 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:23.287364 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:57708.service: Deactivated successfully.
Jul 6 23:37:23.293073 systemd[1]: session-7.scope: Deactivated successfully.
Jul 6 23:37:23.293572 systemd[1]: session-7.scope: Consumed 7.658s CPU time, 229.2M memory peak.
Jul 6 23:37:23.296337 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit.
Jul 6 23:37:23.299833 systemd-logind[1503]: Removed session 7.
Jul 6 23:37:23.404462 update_engine[1508]: I20250706 23:37:23.403931 1508 update_attempter.cc:509] Updating boot flags...
Jul 6 23:37:24.781786 systemd[1]: Created slice kubepods-besteffort-podeb0416b2_3689_47ba_8c21_c2b3d90cbb1e.slice - libcontainer container kubepods-besteffort-podeb0416b2_3689_47ba_8c21_c2b3d90cbb1e.slice.
Jul 6 23:37:24.871037 systemd[1]: Created slice kubepods-besteffort-pod127053a6_e350_4a34_a70b_ea036f548b69.slice - libcontainer container kubepods-besteffort-pod127053a6_e350_4a34_a70b_ea036f548b69.slice.
Jul 6 23:37:24.945289 kubelet[2634]: I0706 23:37:24.945238 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb0416b2-3689-47ba-8c21-c2b3d90cbb1e-tigera-ca-bundle\") pod \"calico-typha-77b66799fb-hzr98\" (UID: \"eb0416b2-3689-47ba-8c21-c2b3d90cbb1e\") " pod="calico-system/calico-typha-77b66799fb-hzr98"
Jul 6 23:37:24.945289 kubelet[2634]: I0706 23:37:24.945289 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eb0416b2-3689-47ba-8c21-c2b3d90cbb1e-typha-certs\") pod \"calico-typha-77b66799fb-hzr98\" (UID: \"eb0416b2-3689-47ba-8c21-c2b3d90cbb1e\") " pod="calico-system/calico-typha-77b66799fb-hzr98"
Jul 6 23:37:24.946016 kubelet[2634]: I0706 23:37:24.945324 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2w9r\" (UniqueName: \"kubernetes.io/projected/eb0416b2-3689-47ba-8c21-c2b3d90cbb1e-kube-api-access-j2w9r\") pod \"calico-typha-77b66799fb-hzr98\" (UID: \"eb0416b2-3689-47ba-8c21-c2b3d90cbb1e\") " pod="calico-system/calico-typha-77b66799fb-hzr98"
Jul 6 23:37:24.973763 kubelet[2634]: E0706 23:37:24.973702 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fflkr" podUID="d990a250-f885-4c68-b48b-89990a4fd720"
Jul 6 23:37:25.045775 kubelet[2634]: I0706 23:37:25.045660 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/127053a6-e350-4a34-a70b-ea036f548b69-cni-bin-dir\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.045775 kubelet[2634]: I0706 23:37:25.045711 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/127053a6-e350-4a34-a70b-ea036f548b69-tigera-ca-bundle\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.045775 kubelet[2634]: I0706 23:37:25.045731 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/127053a6-e350-4a34-a70b-ea036f548b69-var-run-calico\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.045775 kubelet[2634]: I0706 23:37:25.045752 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkbqm\" (UniqueName: \"kubernetes.io/projected/127053a6-e350-4a34-a70b-ea036f548b69-kube-api-access-bkbqm\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.045995 kubelet[2634]: I0706 23:37:25.045787 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/127053a6-e350-4a34-a70b-ea036f548b69-cni-log-dir\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.045995 kubelet[2634]: I0706 23:37:25.045805 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/127053a6-e350-4a34-a70b-ea036f548b69-xtables-lock\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.045995 kubelet[2634]: I0706 23:37:25.045824 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/127053a6-e350-4a34-a70b-ea036f548b69-cni-net-dir\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.045995 kubelet[2634]: I0706 23:37:25.045840 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/127053a6-e350-4a34-a70b-ea036f548b69-policysync\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.045995 kubelet[2634]: I0706 23:37:25.045856 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/127053a6-e350-4a34-a70b-ea036f548b69-var-lib-calico\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.046127 kubelet[2634]: I0706 23:37:25.045886 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/127053a6-e350-4a34-a70b-ea036f548b69-flexvol-driver-host\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.046127 kubelet[2634]: I0706 23:37:25.045927 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/127053a6-e350-4a34-a70b-ea036f548b69-lib-modules\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.046127 kubelet[2634]: I0706 23:37:25.045948 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/127053a6-e350-4a34-a70b-ea036f548b69-node-certs\") pod \"calico-node-jj67s\" (UID: \"127053a6-e350-4a34-a70b-ea036f548b69\") " pod="calico-system/calico-node-jj67s"
Jul 6 23:37:25.090814 containerd[1520]: time="2025-07-06T23:37:25.090762454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77b66799fb-hzr98,Uid:eb0416b2-3689-47ba-8c21-c2b3d90cbb1e,Namespace:calico-system,Attempt:0,}"
Jul 6 23:37:25.146764 kubelet[2634]: I0706 23:37:25.146711 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d990a250-f885-4c68-b48b-89990a4fd720-socket-dir\") pod \"csi-node-driver-fflkr\" (UID: \"d990a250-f885-4c68-b48b-89990a4fd720\") " pod="calico-system/csi-node-driver-fflkr"
Jul 6 23:37:25.146897 kubelet[2634]: I0706 23:37:25.146789 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d990a250-f885-4c68-b48b-89990a4fd720-varrun\") pod \"csi-node-driver-fflkr\" (UID: \"d990a250-f885-4c68-b48b-89990a4fd720\") " pod="calico-system/csi-node-driver-fflkr"
Jul 6 23:37:25.146897 kubelet[2634]: I0706 23:37:25.146835 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d990a250-f885-4c68-b48b-89990a4fd720-kubelet-dir\") pod \"csi-node-driver-fflkr\" (UID: \"d990a250-f885-4c68-b48b-89990a4fd720\") " pod="calico-system/csi-node-driver-fflkr"
Jul 6 23:37:25.146897 kubelet[2634]: I0706 23:37:25.146853 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d990a250-f885-4c68-b48b-89990a4fd720-registration-dir\") pod \"csi-node-driver-fflkr\" (UID: \"d990a250-f885-4c68-b48b-89990a4fd720\") " pod="calico-system/csi-node-driver-fflkr"
Jul 6 23:37:25.146897 kubelet[2634]: I0706 23:37:25.146872 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntdcg\" (UniqueName: \"kubernetes.io/projected/d990a250-f885-4c68-b48b-89990a4fd720-kube-api-access-ntdcg\") pod \"csi-node-driver-fflkr\" (UID: \"d990a250-f885-4c68-b48b-89990a4fd720\") " pod="calico-system/csi-node-driver-fflkr"
Jul 6 23:37:25.161292 containerd[1520]: time="2025-07-06T23:37:25.161235798Z" level=info msg="connecting to shim 1406feae95e2cac998ba94f859ae8ed6bb58b897b8813b169ec0d229f977c7ce" address="unix:///run/containerd/s/91f8604a4afeac2361af1e432a8f883f1ed371c8a9800547af909d016a8f345e" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:37:25.177371 kubelet[2634]: E0706 23:37:25.177335 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:37:25.177371 kubelet[2634]: W0706 23:37:25.177360 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:37:25.177371 kubelet[2634]: E0706 23:37:25.177380 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:37:25.218133 systemd[1]: Started cri-containerd-1406feae95e2cac998ba94f859ae8ed6bb58b897b8813b169ec0d229f977c7ce.scope - libcontainer container 1406feae95e2cac998ba94f859ae8ed6bb58b897b8813b169ec0d229f977c7ce.
Jul 6 23:37:25.248956 kubelet[2634]: E0706 23:37:25.248523 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.248956 kubelet[2634]: W0706 23:37:25.248553 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.248956 kubelet[2634]: E0706 23:37:25.248575 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.248956 kubelet[2634]: E0706 23:37:25.248866 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.248956 kubelet[2634]: W0706 23:37:25.248876 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.248956 kubelet[2634]: E0706 23:37:25.248930 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.249261 kubelet[2634]: E0706 23:37:25.249129 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.249261 kubelet[2634]: W0706 23:37:25.249139 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.249261 kubelet[2634]: E0706 23:37:25.249168 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.249384 kubelet[2634]: E0706 23:37:25.249360 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.249384 kubelet[2634]: W0706 23:37:25.249373 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.250114 kubelet[2634]: E0706 23:37:25.250048 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.250114 kubelet[2634]: E0706 23:37:25.249901 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.250226 kubelet[2634]: W0706 23:37:25.250124 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.250226 kubelet[2634]: E0706 23:37:25.250135 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.250444 kubelet[2634]: E0706 23:37:25.250420 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.250487 kubelet[2634]: W0706 23:37:25.250448 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.250487 kubelet[2634]: E0706 23:37:25.250465 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.250695 kubelet[2634]: E0706 23:37:25.250679 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.250695 kubelet[2634]: W0706 23:37:25.250690 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.250765 kubelet[2634]: E0706 23:37:25.250707 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.250943 kubelet[2634]: E0706 23:37:25.250898 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.250943 kubelet[2634]: W0706 23:37:25.250932 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.251015 kubelet[2634]: E0706 23:37:25.250947 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.251260 kubelet[2634]: E0706 23:37:25.251233 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.251260 kubelet[2634]: W0706 23:37:25.251258 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.251338 kubelet[2634]: E0706 23:37:25.251288 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.251570 kubelet[2634]: E0706 23:37:25.251547 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.251570 kubelet[2634]: W0706 23:37:25.251565 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.251645 kubelet[2634]: E0706 23:37:25.251609 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.254059 kubelet[2634]: E0706 23:37:25.254023 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.254059 kubelet[2634]: W0706 23:37:25.254052 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.254866 kubelet[2634]: E0706 23:37:25.254838 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.255586 kubelet[2634]: E0706 23:37:25.255413 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.255586 kubelet[2634]: W0706 23:37:25.255429 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.256029 kubelet[2634]: E0706 23:37:25.255991 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.257969 kubelet[2634]: E0706 23:37:25.257944 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.257969 kubelet[2634]: W0706 23:37:25.257964 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.258079 kubelet[2634]: E0706 23:37:25.258030 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.258244 kubelet[2634]: E0706 23:37:25.258224 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.258244 kubelet[2634]: W0706 23:37:25.258240 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.258399 kubelet[2634]: E0706 23:37:25.258280 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.258421 kubelet[2634]: E0706 23:37:25.258400 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.258421 kubelet[2634]: W0706 23:37:25.258409 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.258464 kubelet[2634]: E0706 23:37:25.258438 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.258573 kubelet[2634]: E0706 23:37:25.258557 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.258573 kubelet[2634]: W0706 23:37:25.258569 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.258618 kubelet[2634]: E0706 23:37:25.258591 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.258751 kubelet[2634]: E0706 23:37:25.258736 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.258791 kubelet[2634]: W0706 23:37:25.258755 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.258791 kubelet[2634]: E0706 23:37:25.258772 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.259283 kubelet[2634]: E0706 23:37:25.259265 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.259283 kubelet[2634]: W0706 23:37:25.259280 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.259985 kubelet[2634]: E0706 23:37:25.259944 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.261079 kubelet[2634]: E0706 23:37:25.261024 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.261079 kubelet[2634]: W0706 23:37:25.261044 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.261079 kubelet[2634]: E0706 23:37:25.261064 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.261416 kubelet[2634]: E0706 23:37:25.261387 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.261416 kubelet[2634]: W0706 23:37:25.261407 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.261496 kubelet[2634]: E0706 23:37:25.261427 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.261633 kubelet[2634]: E0706 23:37:25.261610 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.261633 kubelet[2634]: W0706 23:37:25.261622 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.261692 kubelet[2634]: E0706 23:37:25.261646 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.261840 kubelet[2634]: E0706 23:37:25.261825 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.261840 kubelet[2634]: W0706 23:37:25.261838 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.261898 kubelet[2634]: E0706 23:37:25.261869 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.262057 kubelet[2634]: E0706 23:37:25.262042 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.262057 kubelet[2634]: W0706 23:37:25.262055 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.262184 kubelet[2634]: E0706 23:37:25.262073 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.262600 kubelet[2634]: E0706 23:37:25.262576 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.262600 kubelet[2634]: W0706 23:37:25.262596 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.262684 kubelet[2634]: E0706 23:37:25.262609 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.274731 kubelet[2634]: E0706 23:37:25.274681 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.274731 kubelet[2634]: W0706 23:37:25.274712 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.274731 kubelet[2634]: E0706 23:37:25.274732 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:25.276524 kubelet[2634]: E0706 23:37:25.276421 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:25.276524 kubelet[2634]: W0706 23:37:25.276476 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:25.276524 kubelet[2634]: E0706 23:37:25.276493 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:25.278498 containerd[1520]: time="2025-07-06T23:37:25.278362847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77b66799fb-hzr98,Uid:eb0416b2-3689-47ba-8c21-c2b3d90cbb1e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1406feae95e2cac998ba94f859ae8ed6bb58b897b8813b169ec0d229f977c7ce\"" Jul 6 23:37:25.294459 containerd[1520]: time="2025-07-06T23:37:25.294402553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:37:25.476980 containerd[1520]: time="2025-07-06T23:37:25.476300410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jj67s,Uid:127053a6-e350-4a34-a70b-ea036f548b69,Namespace:calico-system,Attempt:0,}" Jul 6 23:37:25.514674 containerd[1520]: time="2025-07-06T23:37:25.514452461Z" level=info msg="connecting to shim 712302d8c77c0f7983305886eb498324e0c13733332f91e55972bc6800dff190" address="unix:///run/containerd/s/926438006c31733938a9532a85db33e5629db637f01137366a401bfc52df4c3e" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:37:25.561118 systemd[1]: Started cri-containerd-712302d8c77c0f7983305886eb498324e0c13733332f91e55972bc6800dff190.scope - libcontainer container 712302d8c77c0f7983305886eb498324e0c13733332f91e55972bc6800dff190. 
Jul 6 23:37:25.595284 containerd[1520]: time="2025-07-06T23:37:25.595241330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jj67s,Uid:127053a6-e350-4a34-a70b-ea036f548b69,Namespace:calico-system,Attempt:0,} returns sandbox id \"712302d8c77c0f7983305886eb498324e0c13733332f91e55972bc6800dff190\"" Jul 6 23:37:26.183560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060329019.mount: Deactivated successfully. Jul 6 23:37:26.705305 containerd[1520]: time="2025-07-06T23:37:26.705250990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:26.705694 containerd[1520]: time="2025-07-06T23:37:26.705654056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 6 23:37:26.706543 containerd[1520]: time="2025-07-06T23:37:26.706506195Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:26.708699 containerd[1520]: time="2025-07-06T23:37:26.708662427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:26.709685 containerd[1520]: time="2025-07-06T23:37:26.709649588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.415024917s" Jul 6 23:37:26.709685 containerd[1520]: time="2025-07-06T23:37:26.709682913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image 
reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 6 23:37:26.712243 containerd[1520]: time="2025-07-06T23:37:26.712209406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:37:26.733498 containerd[1520]: time="2025-07-06T23:37:26.732887139Z" level=info msg="CreateContainer within sandbox \"1406feae95e2cac998ba94f859ae8ed6bb58b897b8813b169ec0d229f977c7ce\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:37:26.739124 containerd[1520]: time="2025-07-06T23:37:26.739087550Z" level=info msg="Container 042661f7afa38d465c09535d54ab08c63a4eda928751f53a75cfebc983415ee0: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:26.751141 containerd[1520]: time="2025-07-06T23:37:26.751073425Z" level=info msg="CreateContainer within sandbox \"1406feae95e2cac998ba94f859ae8ed6bb58b897b8813b169ec0d229f977c7ce\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"042661f7afa38d465c09535d54ab08c63a4eda928751f53a75cfebc983415ee0\"" Jul 6 23:37:26.751810 containerd[1520]: time="2025-07-06T23:37:26.751765858Z" level=info msg="StartContainer for \"042661f7afa38d465c09535d54ab08c63a4eda928751f53a75cfebc983415ee0\"" Jul 6 23:37:26.754037 containerd[1520]: time="2025-07-06T23:37:26.753996222Z" level=info msg="connecting to shim 042661f7afa38d465c09535d54ab08c63a4eda928751f53a75cfebc983415ee0" address="unix:///run/containerd/s/91f8604a4afeac2361af1e432a8f883f1ed371c8a9800547af909d016a8f345e" protocol=ttrpc version=3 Jul 6 23:37:26.771100 systemd[1]: Started cri-containerd-042661f7afa38d465c09535d54ab08c63a4eda928751f53a75cfebc983415ee0.scope - libcontainer container 042661f7afa38d465c09535d54ab08c63a4eda928751f53a75cfebc983415ee0. 
Jul 6 23:37:26.815817 containerd[1520]: time="2025-07-06T23:37:26.815782221Z" level=info msg="StartContainer for \"042661f7afa38d465c09535d54ab08c63a4eda928751f53a75cfebc983415ee0\" returns successfully" Jul 6 23:37:27.268497 kubelet[2634]: E0706 23:37:27.268432 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fflkr" podUID="d990a250-f885-4c68-b48b-89990a4fd720" Jul 6 23:37:27.349505 kubelet[2634]: I0706 23:37:27.348045 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77b66799fb-hzr98" podStartSLOduration=1.924757918 podStartE2EDuration="3.348027187s" podCreationTimestamp="2025-07-06 23:37:24 +0000 UTC" firstStartedPulling="2025-07-06 23:37:25.288832799 +0000 UTC m=+17.112231513" lastFinishedPulling="2025-07-06 23:37:26.712102108 +0000 UTC m=+18.535500782" observedRunningTime="2025-07-06 23:37:27.347861602 +0000 UTC m=+19.171260316" watchObservedRunningTime="2025-07-06 23:37:27.348027187 +0000 UTC m=+19.171425901" Jul 6 23:37:27.364680 kubelet[2634]: E0706 23:37:27.364336 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.364680 kubelet[2634]: W0706 23:37:27.364360 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.364680 kubelet[2634]: E0706 23:37:27.364382 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:27.364680 kubelet[2634]: E0706 23:37:27.364589 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.364680 kubelet[2634]: W0706 23:37:27.364597 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.364680 kubelet[2634]: E0706 23:37:27.364607 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:27.364932 kubelet[2634]: E0706 23:37:27.364744 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.364932 kubelet[2634]: W0706 23:37:27.364752 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.364932 kubelet[2634]: E0706 23:37:27.364759 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:27.364932 kubelet[2634]: E0706 23:37:27.364879 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.364932 kubelet[2634]: W0706 23:37:27.364886 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.364932 kubelet[2634]: E0706 23:37:27.364894 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:27.365056 kubelet[2634]: E0706 23:37:27.365048 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.365078 kubelet[2634]: W0706 23:37:27.365056 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.365078 kubelet[2634]: E0706 23:37:27.365065 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:27.365611 kubelet[2634]: E0706 23:37:27.365179 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.365611 kubelet[2634]: W0706 23:37:27.365190 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.365611 kubelet[2634]: E0706 23:37:27.365197 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:27.365611 kubelet[2634]: E0706 23:37:27.365307 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.365611 kubelet[2634]: W0706 23:37:27.365314 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.365611 kubelet[2634]: E0706 23:37:27.365320 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:27.365611 kubelet[2634]: E0706 23:37:27.365441 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.365611 kubelet[2634]: W0706 23:37:27.365447 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.365611 kubelet[2634]: E0706 23:37:27.365454 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:27.365853 kubelet[2634]: E0706 23:37:27.365628 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.365853 kubelet[2634]: W0706 23:37:27.365637 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.365853 kubelet[2634]: E0706 23:37:27.365646 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:27.365853 kubelet[2634]: E0706 23:37:27.365774 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.365853 kubelet[2634]: W0706 23:37:27.365786 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.365853 kubelet[2634]: E0706 23:37:27.365793 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:27.365999 kubelet[2634]: E0706 23:37:27.365932 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.365999 kubelet[2634]: W0706 23:37:27.365940 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.365999 kubelet[2634]: E0706 23:37:27.365947 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:37:27.374988 kubelet[2634]: E0706 23:37:27.374961 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:37:27.374988 kubelet[2634]: W0706 23:37:27.374980 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:37:27.374988 kubelet[2634]: E0706 23:37:27.374990 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:37:27.855499 containerd[1520]: time="2025-07-06T23:37:27.855429611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:27.857292 containerd[1520]: time="2025-07-06T23:37:27.857259376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 6 23:37:27.858593 containerd[1520]: time="2025-07-06T23:37:27.858547296Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:27.862789 containerd[1520]: time="2025-07-06T23:37:27.862455104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:27.863538 containerd[1520]: time="2025-07-06T23:37:27.863495586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.151255736s" Jul 6 23:37:27.863538 containerd[1520]: time="2025-07-06T23:37:27.863533632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 6 23:37:27.868282 containerd[1520]: time="2025-07-06T23:37:27.868227283Z" level=info msg="CreateContainer within sandbox \"712302d8c77c0f7983305886eb498324e0c13733332f91e55972bc6800dff190\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:37:27.875029 containerd[1520]: time="2025-07-06T23:37:27.874981453Z" level=info msg="Container 75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:27.882221 containerd[1520]: time="2025-07-06T23:37:27.882172132Z" level=info msg="CreateContainer within sandbox \"712302d8c77c0f7983305886eb498324e0c13733332f91e55972bc6800dff190\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c\"" Jul 6 23:37:27.882705 containerd[1520]: time="2025-07-06T23:37:27.882668289Z" level=info msg="StartContainer for \"75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c\"" Jul 6 23:37:27.885886 containerd[1520]: time="2025-07-06T23:37:27.885764371Z" level=info msg="connecting to shim 75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c" address="unix:///run/containerd/s/926438006c31733938a9532a85db33e5629db637f01137366a401bfc52df4c3e" protocol=ttrpc version=3 Jul 6 23:37:27.911140 systemd[1]: Started cri-containerd-75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c.scope - libcontainer container 75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c. 
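The repeated driver-call.go errors above come from kubelet probing the FlexVolume plugin directory nodeagent~uds: the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so every `init` call returns empty output where the FlexVolume contract expects a JSON status object, and unmarshalling "" fails. A minimal sketch of that unmarshal step — `DriverStatus` here is a simplified stand-in for the real kubelet type, not its exact shape:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is a simplified version of the JSON object a FlexVolume
// driver must print on stdout for every call (init, mount, unmount, ...).
type DriverStatus struct {
	Status  string `json:"status"` // "Success", "Failure", or "Not supported"
	Message string `json:"message,omitempty"`
}

// parseDriverOutput mirrors what kubelet's driver-call.go does with the
// driver's stdout: unmarshal it, or report the failure.
func parseDriverOutput(out []byte) (*DriverStatus, error) {
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, err)
	}
	return &st, nil
}

func main() {
	// A missing driver binary yields empty output, which reproduces the
	// logged "unexpected end of JSON input" error.
	if _, err := parseDriverOutput(nil); err != nil {
		fmt.Println(err)
	}
	// A well-formed driver reply parses cleanly.
	if st, err := parseDriverOutput([]byte(`{"status":"Success"}`)); err == nil {
		fmt.Println(st.Status)
	}
}
```

The probe loop retries on every plugin-directory scan, which is why the same three-message pattern repeats within a few milliseconds.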
Jul 6 23:37:27.956434 containerd[1520]: time="2025-07-06T23:37:27.956346953Z" level=info msg="StartContainer for \"75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c\" returns successfully" Jul 6 23:37:27.985126 systemd[1]: cri-containerd-75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c.scope: Deactivated successfully. Jul 6 23:37:28.010750 containerd[1520]: time="2025-07-06T23:37:28.010697459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c\" id:\"75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c\" pid:3284 exited_at:{seconds:1751845048 nanos:4561388}" Jul 6 23:37:28.011648 containerd[1520]: time="2025-07-06T23:37:28.011579510Z" level=info msg="received exit event container_id:\"75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c\" id:\"75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c\" pid:3284 exited_at:{seconds:1751845048 nanos:4561388}" Jul 6 23:37:28.065341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75604d53ed78fc1da98c17c9852c78eea3ead6341171b4496fd7f619cb98e55c-rootfs.mount: Deactivated successfully. 
Jul 6 23:37:28.339467 kubelet[2634]: I0706 23:37:28.339420 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:37:28.341413 containerd[1520]: time="2025-07-06T23:37:28.341371128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:37:29.268268 kubelet[2634]: E0706 23:37:29.268216 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fflkr" podUID="d990a250-f885-4c68-b48b-89990a4fd720" Jul 6 23:37:31.217390 containerd[1520]: time="2025-07-06T23:37:31.217343206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:31.218228 containerd[1520]: time="2025-07-06T23:37:31.218025294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 6 23:37:31.218847 containerd[1520]: time="2025-07-06T23:37:31.218810796Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:31.220743 containerd[1520]: time="2025-07-06T23:37:31.220703882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:31.221482 containerd[1520]: time="2025-07-06T23:37:31.221456220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.880027123s" Jul 6 23:37:31.221583 containerd[1520]: time="2025-07-06T23:37:31.221565834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 6 23:37:31.224613 containerd[1520]: time="2025-07-06T23:37:31.224577465Z" level=info msg="CreateContainer within sandbox \"712302d8c77c0f7983305886eb498324e0c13733332f91e55972bc6800dff190\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:37:31.254229 containerd[1520]: time="2025-07-06T23:37:31.254179388Z" level=info msg="Container bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:31.263217 containerd[1520]: time="2025-07-06T23:37:31.263146832Z" level=info msg="CreateContainer within sandbox \"712302d8c77c0f7983305886eb498324e0c13733332f91e55972bc6800dff190\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1\"" Jul 6 23:37:31.264512 containerd[1520]: time="2025-07-06T23:37:31.264484886Z" level=info msg="StartContainer for \"bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1\"" Jul 6 23:37:31.266880 containerd[1520]: time="2025-07-06T23:37:31.266838512Z" level=info msg="connecting to shim bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1" address="unix:///run/containerd/s/926438006c31733938a9532a85db33e5629db637f01137366a401bfc52df4c3e" protocol=ttrpc version=3 Jul 6 23:37:31.268457 kubelet[2634]: E0706 23:37:31.267987 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-fflkr" podUID="d990a250-f885-4c68-b48b-89990a4fd720" Jul 6 23:37:31.294353 systemd[1]: Started cri-containerd-bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1.scope - libcontainer container bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1. Jul 6 23:37:31.331381 containerd[1520]: time="2025-07-06T23:37:31.331331405Z" level=info msg="StartContainer for \"bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1\" returns successfully" Jul 6 23:37:31.949626 systemd[1]: cri-containerd-bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1.scope: Deactivated successfully. Jul 6 23:37:31.949918 systemd[1]: cri-containerd-bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1.scope: Consumed 498ms CPU time, 175.4M memory peak, 2.7M read from disk, 165.8M written to disk. Jul 6 23:37:31.951023 containerd[1520]: time="2025-07-06T23:37:31.950993698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1\" id:\"bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1\" pid:3344 exited_at:{seconds:1751845051 nanos:950519996}" Jul 6 23:37:31.960812 containerd[1520]: time="2025-07-06T23:37:31.960755005Z" level=info msg="received exit event container_id:\"bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1\" id:\"bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1\" pid:3344 exited_at:{seconds:1751845051 nanos:950519996}" Jul 6 23:37:31.983982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd89ee85d3deb90012d0a0ec9fbfeaaf721e584d823ede24ba6915089fbdf2c1-rootfs.mount: Deactivated successfully. 
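The cni:v3.30.2 pull above reports 65888320 bytes read over a wall time of 2.880027123s, so the effective transfer rate can be backed out directly from the logged figures (this uses "bytes read", i.e. data actually fetched, rather than the unpacked image size of 67257561):

```go
package main

import "fmt"

func main() {
	// "bytes read" and pull duration as logged by containerd for
	// ghcr.io/flatcar/calico/cni:v3.30.2.
	const bytesRead = 65888320.0
	const seconds = 2.880027123

	mibPerSec := bytesRead / seconds / (1 << 20)
	fmt.Printf("%.1f MiB/s\n", mibPerSec) // roughly 21.8 MiB/s
}
```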
Jul 6 23:37:32.000919 kubelet[2634]: I0706 23:37:32.000887 2634 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:37:32.101435 systemd[1]: Created slice kubepods-besteffort-pod1323ebd3_6c31_4106_8489_61ffc77fae0f.slice - libcontainer container kubepods-besteffort-pod1323ebd3_6c31_4106_8489_61ffc77fae0f.slice. Jul 6 23:37:32.115658 systemd[1]: Created slice kubepods-besteffort-pod563e0255_c29b_4292_9f17_d6aef8b5cd13.slice - libcontainer container kubepods-besteffort-pod563e0255_c29b_4292_9f17_d6aef8b5cd13.slice. Jul 6 23:37:32.124279 systemd[1]: Created slice kubepods-besteffort-podb1b7190a_9678_4dec_abee_43c139b7b0b4.slice - libcontainer container kubepods-besteffort-podb1b7190a_9678_4dec_abee_43c139b7b0b4.slice. Jul 6 23:37:32.133211 systemd[1]: Created slice kubepods-burstable-pod64788c1f_04c1_4e4c_8486_e17aa8abc6f7.slice - libcontainer container kubepods-burstable-pod64788c1f_04c1_4e4c_8486_e17aa8abc6f7.slice. Jul 6 23:37:32.141425 systemd[1]: Created slice kubepods-burstable-pod8d06d505_6876_4335_b79e_3060be27b430.slice - libcontainer container kubepods-burstable-pod8d06d505_6876_4335_b79e_3060be27b430.slice. Jul 6 23:37:32.148319 systemd[1]: Created slice kubepods-besteffort-podb41952be_a6bf_4a79_b50e_e43b4dd09769.slice - libcontainer container kubepods-besteffort-podb41952be_a6bf_4a79_b50e_e43b4dd09769.slice. Jul 6 23:37:32.157705 systemd[1]: Created slice kubepods-besteffort-pod552500ee_cc5f_44cf_95f3_d22a77a37375.slice - libcontainer container kubepods-besteffort-pod552500ee_cc5f_44cf_95f3_d22a77a37375.slice. 
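The transient slice names systemd creates above follow the kubelet systemd-cgroup-driver convention visible in the log: `kubepods-<qos>-pod<uid>.slice`, with the pod UID's dashes mapped to underscores (the `besteffort`/`burstable` segment is the pod's QoS class). A small sketch of that naming rule, inferred from the slice names in this log rather than copied from kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice builds the transient systemd slice name for a pod, matching
// the kubepods-*-pod*.slice units seen in the log above. qos is
// "besteffort" or "burstable"; Guaranteed pods use an empty qos segment.
func podSlice(qos, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qos == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	// Reproduces the slice created for the calico-apiserver pod above.
	fmt.Println(podSlice("besteffort", "1323ebd3-6c31-4106-8489-61ffc77fae0f"))
}
```

Mapping a slice name from `systemctl` output back to a pod UID is just the reverse substitution.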
Jul 6 23:37:32.201978 kubelet[2634]: I0706 23:37:32.201117 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx8dw\" (UniqueName: \"kubernetes.io/projected/8d06d505-6876-4335-b79e-3060be27b430-kube-api-access-bx8dw\") pod \"coredns-7c65d6cfc9-bk6h2\" (UID: \"8d06d505-6876-4335-b79e-3060be27b430\") " pod="kube-system/coredns-7c65d6cfc9-bk6h2" Jul 6 23:37:32.201978 kubelet[2634]: I0706 23:37:32.201174 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/552500ee-cc5f-44cf-95f3-d22a77a37375-whisker-backend-key-pair\") pod \"whisker-5f94dd8485-cxgxl\" (UID: \"552500ee-cc5f-44cf-95f3-d22a77a37375\") " pod="calico-system/whisker-5f94dd8485-cxgxl" Jul 6 23:37:32.201978 kubelet[2634]: I0706 23:37:32.201198 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brfnq\" (UniqueName: \"kubernetes.io/projected/b41952be-a6bf-4a79-b50e-e43b4dd09769-kube-api-access-brfnq\") pod \"calico-apiserver-786b9888cb-v8zz6\" (UID: \"b41952be-a6bf-4a79-b50e-e43b4dd09769\") " pod="calico-apiserver/calico-apiserver-786b9888cb-v8zz6" Jul 6 23:37:32.201978 kubelet[2634]: I0706 23:37:32.201218 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/563e0255-c29b-4292-9f17-d6aef8b5cd13-goldmane-key-pair\") pod \"goldmane-58fd7646b9-shs2j\" (UID: \"563e0255-c29b-4292-9f17-d6aef8b5cd13\") " pod="calico-system/goldmane-58fd7646b9-shs2j" Jul 6 23:37:32.201978 kubelet[2634]: I0706 23:37:32.201237 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/552500ee-cc5f-44cf-95f3-d22a77a37375-whisker-ca-bundle\") pod \"whisker-5f94dd8485-cxgxl\" (UID: 
\"552500ee-cc5f-44cf-95f3-d22a77a37375\") " pod="calico-system/whisker-5f94dd8485-cxgxl" Jul 6 23:37:32.202224 kubelet[2634]: I0706 23:37:32.201255 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64788c1f-04c1-4e4c-8486-e17aa8abc6f7-config-volume\") pod \"coredns-7c65d6cfc9-9ngv9\" (UID: \"64788c1f-04c1-4e4c-8486-e17aa8abc6f7\") " pod="kube-system/coredns-7c65d6cfc9-9ngv9" Jul 6 23:37:32.202224 kubelet[2634]: I0706 23:37:32.201273 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvdgs\" (UniqueName: \"kubernetes.io/projected/64788c1f-04c1-4e4c-8486-e17aa8abc6f7-kube-api-access-hvdgs\") pod \"coredns-7c65d6cfc9-9ngv9\" (UID: \"64788c1f-04c1-4e4c-8486-e17aa8abc6f7\") " pod="kube-system/coredns-7c65d6cfc9-9ngv9" Jul 6 23:37:32.202224 kubelet[2634]: I0706 23:37:32.201289 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563e0255-c29b-4292-9f17-d6aef8b5cd13-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-shs2j\" (UID: \"563e0255-c29b-4292-9f17-d6aef8b5cd13\") " pod="calico-system/goldmane-58fd7646b9-shs2j" Jul 6 23:37:32.202224 kubelet[2634]: I0706 23:37:32.201305 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b41952be-a6bf-4a79-b50e-e43b4dd09769-calico-apiserver-certs\") pod \"calico-apiserver-786b9888cb-v8zz6\" (UID: \"b41952be-a6bf-4a79-b50e-e43b4dd09769\") " pod="calico-apiserver/calico-apiserver-786b9888cb-v8zz6" Jul 6 23:37:32.202224 kubelet[2634]: I0706 23:37:32.201325 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563e0255-c29b-4292-9f17-d6aef8b5cd13-config\") 
pod \"goldmane-58fd7646b9-shs2j\" (UID: \"563e0255-c29b-4292-9f17-d6aef8b5cd13\") " pod="calico-system/goldmane-58fd7646b9-shs2j" Jul 6 23:37:32.202327 kubelet[2634]: I0706 23:37:32.201340 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d06d505-6876-4335-b79e-3060be27b430-config-volume\") pod \"coredns-7c65d6cfc9-bk6h2\" (UID: \"8d06d505-6876-4335-b79e-3060be27b430\") " pod="kube-system/coredns-7c65d6cfc9-bk6h2" Jul 6 23:37:32.202327 kubelet[2634]: I0706 23:37:32.201358 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1323ebd3-6c31-4106-8489-61ffc77fae0f-calico-apiserver-certs\") pod \"calico-apiserver-786b9888cb-8f2tc\" (UID: \"1323ebd3-6c31-4106-8489-61ffc77fae0f\") " pod="calico-apiserver/calico-apiserver-786b9888cb-8f2tc" Jul 6 23:37:32.202327 kubelet[2634]: I0706 23:37:32.201373 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc7n5\" (UniqueName: \"kubernetes.io/projected/563e0255-c29b-4292-9f17-d6aef8b5cd13-kube-api-access-vc7n5\") pod \"goldmane-58fd7646b9-shs2j\" (UID: \"563e0255-c29b-4292-9f17-d6aef8b5cd13\") " pod="calico-system/goldmane-58fd7646b9-shs2j" Jul 6 23:37:32.202327 kubelet[2634]: I0706 23:37:32.201449 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b7190a-9678-4dec-abee-43c139b7b0b4-tigera-ca-bundle\") pod \"calico-kube-controllers-55c6c57ffb-d8d7b\" (UID: \"b1b7190a-9678-4dec-abee-43c139b7b0b4\") " pod="calico-system/calico-kube-controllers-55c6c57ffb-d8d7b" Jul 6 23:37:32.202327 kubelet[2634]: I0706 23:37:32.201473 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhvgn\" 
(UniqueName: \"kubernetes.io/projected/1323ebd3-6c31-4106-8489-61ffc77fae0f-kube-api-access-vhvgn\") pod \"calico-apiserver-786b9888cb-8f2tc\" (UID: \"1323ebd3-6c31-4106-8489-61ffc77fae0f\") " pod="calico-apiserver/calico-apiserver-786b9888cb-8f2tc" Jul 6 23:37:32.202443 kubelet[2634]: I0706 23:37:32.201492 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt29n\" (UniqueName: \"kubernetes.io/projected/b1b7190a-9678-4dec-abee-43c139b7b0b4-kube-api-access-jt29n\") pod \"calico-kube-controllers-55c6c57ffb-d8d7b\" (UID: \"b1b7190a-9678-4dec-abee-43c139b7b0b4\") " pod="calico-system/calico-kube-controllers-55c6c57ffb-d8d7b" Jul 6 23:37:32.202443 kubelet[2634]: I0706 23:37:32.201551 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cpqx\" (UniqueName: \"kubernetes.io/projected/552500ee-cc5f-44cf-95f3-d22a77a37375-kube-api-access-5cpqx\") pod \"whisker-5f94dd8485-cxgxl\" (UID: \"552500ee-cc5f-44cf-95f3-d22a77a37375\") " pod="calico-system/whisker-5f94dd8485-cxgxl" Jul 6 23:37:32.373666 containerd[1520]: time="2025-07-06T23:37:32.373627412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:37:32.407697 containerd[1520]: time="2025-07-06T23:37:32.407646323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786b9888cb-8f2tc,Uid:1323ebd3-6c31-4106-8489-61ffc77fae0f,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:37:32.420548 containerd[1520]: time="2025-07-06T23:37:32.420504683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-shs2j,Uid:563e0255-c29b-4292-9f17-d6aef8b5cd13,Namespace:calico-system,Attempt:0,}" Jul 6 23:37:32.435411 containerd[1520]: time="2025-07-06T23:37:32.430988706Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-55c6c57ffb-d8d7b,Uid:b1b7190a-9678-4dec-abee-43c139b7b0b4,Namespace:calico-system,Attempt:0,}" Jul 6 23:37:32.442287 containerd[1520]: time="2025-07-06T23:37:32.438943336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9ngv9,Uid:64788c1f-04c1-4e4c-8486-e17aa8abc6f7,Namespace:kube-system,Attempt:0,}" Jul 6 23:37:32.445826 containerd[1520]: time="2025-07-06T23:37:32.445786587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk6h2,Uid:8d06d505-6876-4335-b79e-3060be27b430,Namespace:kube-system,Attempt:0,}" Jul 6 23:37:32.455012 containerd[1520]: time="2025-07-06T23:37:32.454334010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786b9888cb-v8zz6,Uid:b41952be-a6bf-4a79-b50e-e43b4dd09769,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:37:32.468748 containerd[1520]: time="2025-07-06T23:37:32.462275157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f94dd8485-cxgxl,Uid:552500ee-cc5f-44cf-95f3-d22a77a37375,Namespace:calico-system,Attempt:0,}" Jul 6 23:37:32.997079 containerd[1520]: time="2025-07-06T23:37:32.997022023Z" level=error msg="Failed to destroy network for sandbox \"f64eccec4830b2f873254956e47c4c4cf4c62105ffe7124c21655051d7f19475\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:32.998515 containerd[1520]: time="2025-07-06T23:37:32.998462802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786b9888cb-8f2tc,Uid:1323ebd3-6c31-4106-8489-61ffc77fae0f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64eccec4830b2f873254956e47c4c4cf4c62105ffe7124c21655051d7f19475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.000948 containerd[1520]: time="2025-07-06T23:37:33.000901625Z" level=error msg="Failed to destroy network for sandbox \"46d9e47c889dcedf7a7aa86e22fdfdf9467461468ec6f3d7c3b0ca3f22fd6b51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.001526 kubelet[2634]: E0706 23:37:33.001472 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64eccec4830b2f873254956e47c4c4cf4c62105ffe7124c21655051d7f19475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.001823 kubelet[2634]: E0706 23:37:33.001578 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64eccec4830b2f873254956e47c4c4cf4c62105ffe7124c21655051d7f19475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786b9888cb-8f2tc" Jul 6 23:37:33.001823 kubelet[2634]: E0706 23:37:33.001600 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64eccec4830b2f873254956e47c4c4cf4c62105ffe7124c21655051d7f19475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786b9888cb-8f2tc" Jul 6 23:37:33.001823 kubelet[2634]: E0706 23:37:33.001661 2634 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-786b9888cb-8f2tc_calico-apiserver(1323ebd3-6c31-4106-8489-61ffc77fae0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-786b9888cb-8f2tc_calico-apiserver(1323ebd3-6c31-4106-8489-61ffc77fae0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f64eccec4830b2f873254956e47c4c4cf4c62105ffe7124c21655051d7f19475\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-786b9888cb-8f2tc" podUID="1323ebd3-6c31-4106-8489-61ffc77fae0f" Jul 6 23:37:33.003050 containerd[1520]: time="2025-07-06T23:37:33.002999916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk6h2,Uid:8d06d505-6876-4335-b79e-3060be27b430,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d9e47c889dcedf7a7aa86e22fdfdf9467461468ec6f3d7c3b0ca3f22fd6b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.003509 kubelet[2634]: E0706 23:37:33.003250 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d9e47c889dcedf7a7aa86e22fdfdf9467461468ec6f3d7c3b0ca3f22fd6b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.003509 kubelet[2634]: E0706 23:37:33.003338 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"46d9e47c889dcedf7a7aa86e22fdfdf9467461468ec6f3d7c3b0ca3f22fd6b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bk6h2" Jul 6 23:37:33.003509 kubelet[2634]: E0706 23:37:33.003365 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d9e47c889dcedf7a7aa86e22fdfdf9467461468ec6f3d7c3b0ca3f22fd6b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bk6h2" Jul 6 23:37:33.003838 kubelet[2634]: E0706 23:37:33.003620 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bk6h2_kube-system(8d06d505-6876-4335-b79e-3060be27b430)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bk6h2_kube-system(8d06d505-6876-4335-b79e-3060be27b430)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46d9e47c889dcedf7a7aa86e22fdfdf9467461468ec6f3d7c3b0ca3f22fd6b51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bk6h2" podUID="8d06d505-6876-4335-b79e-3060be27b430" Jul 6 23:37:33.017085 containerd[1520]: time="2025-07-06T23:37:33.017031189Z" level=error msg="Failed to destroy network for sandbox \"168f42d5d03c7a38f15c974a8920c1af6be573a195a3ae2d56d3c8a5a65f4ac3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.020557 
containerd[1520]: time="2025-07-06T23:37:33.020473959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c6c57ffb-d8d7b,Uid:b1b7190a-9678-4dec-abee-43c139b7b0b4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"168f42d5d03c7a38f15c974a8920c1af6be573a195a3ae2d56d3c8a5a65f4ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.020958 kubelet[2634]: E0706 23:37:33.020806 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168f42d5d03c7a38f15c974a8920c1af6be573a195a3ae2d56d3c8a5a65f4ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.021401 kubelet[2634]: E0706 23:37:33.021066 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168f42d5d03c7a38f15c974a8920c1af6be573a195a3ae2d56d3c8a5a65f4ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55c6c57ffb-d8d7b" Jul 6 23:37:33.021401 kubelet[2634]: E0706 23:37:33.021092 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168f42d5d03c7a38f15c974a8920c1af6be573a195a3ae2d56d3c8a5a65f4ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-55c6c57ffb-d8d7b" Jul 6 23:37:33.021401 kubelet[2634]: E0706 23:37:33.021149 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55c6c57ffb-d8d7b_calico-system(b1b7190a-9678-4dec-abee-43c139b7b0b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55c6c57ffb-d8d7b_calico-system(b1b7190a-9678-4dec-abee-43c139b7b0b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"168f42d5d03c7a38f15c974a8920c1af6be573a195a3ae2d56d3c8a5a65f4ac3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55c6c57ffb-d8d7b" podUID="b1b7190a-9678-4dec-abee-43c139b7b0b4" Jul 6 23:37:33.022614 containerd[1520]: time="2025-07-06T23:37:33.022575930Z" level=error msg="Failed to destroy network for sandbox \"0fea4f5a52139079c14f76ffd7b8c448076962b487aa95e173fb2ff066956954\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.024362 containerd[1520]: time="2025-07-06T23:37:33.024249770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786b9888cb-v8zz6,Uid:b41952be-a6bf-4a79-b50e-e43b4dd09769,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fea4f5a52139079c14f76ffd7b8c448076962b487aa95e173fb2ff066956954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.024362 containerd[1520]: time="2025-07-06T23:37:33.024287894Z" level=error msg="Failed to 
destroy network for sandbox \"9f530d6021a2449ad36be80063d4805286e05aeac55e627c4898a9d109205017\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.024762 kubelet[2634]: E0706 23:37:33.024554 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fea4f5a52139079c14f76ffd7b8c448076962b487aa95e173fb2ff066956954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.024762 kubelet[2634]: E0706 23:37:33.024623 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fea4f5a52139079c14f76ffd7b8c448076962b487aa95e173fb2ff066956954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786b9888cb-v8zz6" Jul 6 23:37:33.024762 kubelet[2634]: E0706 23:37:33.024646 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fea4f5a52139079c14f76ffd7b8c448076962b487aa95e173fb2ff066956954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786b9888cb-v8zz6" Jul 6 23:37:33.024870 kubelet[2634]: E0706 23:37:33.024691 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-786b9888cb-v8zz6_calico-apiserver(b41952be-a6bf-4a79-b50e-e43b4dd09769)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-786b9888cb-v8zz6_calico-apiserver(b41952be-a6bf-4a79-b50e-e43b4dd09769)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fea4f5a52139079c14f76ffd7b8c448076962b487aa95e173fb2ff066956954\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-786b9888cb-v8zz6" podUID="b41952be-a6bf-4a79-b50e-e43b4dd09769" Jul 6 23:37:33.025290 containerd[1520]: time="2025-07-06T23:37:33.025230207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9ngv9,Uid:64788c1f-04c1-4e4c-8486-e17aa8abc6f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f530d6021a2449ad36be80063d4805286e05aeac55e627c4898a9d109205017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.026110 kubelet[2634]: E0706 23:37:33.025599 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f530d6021a2449ad36be80063d4805286e05aeac55e627c4898a9d109205017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.026110 kubelet[2634]: E0706 23:37:33.025643 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f530d6021a2449ad36be80063d4805286e05aeac55e627c4898a9d109205017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9ngv9" Jul 6 23:37:33.026110 kubelet[2634]: E0706 23:37:33.025662 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f530d6021a2449ad36be80063d4805286e05aeac55e627c4898a9d109205017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9ngv9" Jul 6 23:37:33.026201 kubelet[2634]: E0706 23:37:33.025698 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9ngv9_kube-system(64788c1f-04c1-4e4c-8486-e17aa8abc6f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9ngv9_kube-system(64788c1f-04c1-4e4c-8486-e17aa8abc6f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f530d6021a2449ad36be80063d4805286e05aeac55e627c4898a9d109205017\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9ngv9" podUID="64788c1f-04c1-4e4c-8486-e17aa8abc6f7" Jul 6 23:37:33.027444 containerd[1520]: time="2025-07-06T23:37:33.027337578Z" level=error msg="Failed to destroy network for sandbox \"1e1454d632e1d14a89e847756a9fb6e4ea1f9f071c635905ae1044b404c9ae24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.028342 containerd[1520]: time="2025-07-06T23:37:33.028291692Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-58fd7646b9-shs2j,Uid:563e0255-c29b-4292-9f17-d6aef8b5cd13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1454d632e1d14a89e847756a9fb6e4ea1f9f071c635905ae1044b404c9ae24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.028575 kubelet[2634]: E0706 23:37:33.028521 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1454d632e1d14a89e847756a9fb6e4ea1f9f071c635905ae1044b404c9ae24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.028636 kubelet[2634]: E0706 23:37:33.028584 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1454d632e1d14a89e847756a9fb6e4ea1f9f071c635905ae1044b404c9ae24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-shs2j" Jul 6 23:37:33.028659 kubelet[2634]: E0706 23:37:33.028601 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1454d632e1d14a89e847756a9fb6e4ea1f9f071c635905ae1044b404c9ae24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-shs2j" Jul 6 23:37:33.028707 kubelet[2634]: E0706 23:37:33.028670 2634 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-shs2j_calico-system(563e0255-c29b-4292-9f17-d6aef8b5cd13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-shs2j_calico-system(563e0255-c29b-4292-9f17-d6aef8b5cd13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e1454d632e1d14a89e847756a9fb6e4ea1f9f071c635905ae1044b404c9ae24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-shs2j" podUID="563e0255-c29b-4292-9f17-d6aef8b5cd13" Jul 6 23:37:33.032397 containerd[1520]: time="2025-07-06T23:37:33.032357296Z" level=error msg="Failed to destroy network for sandbox \"c5e89cb9ebcb46044f9bbafeb589bcd59fbf1cd8ac618595d66cda32509b055e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.033365 containerd[1520]: time="2025-07-06T23:37:33.033328132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f94dd8485-cxgxl,Uid:552500ee-cc5f-44cf-95f3-d22a77a37375,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e89cb9ebcb46044f9bbafeb589bcd59fbf1cd8ac618595d66cda32509b055e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.033551 kubelet[2634]: E0706 23:37:33.033508 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e89cb9ebcb46044f9bbafeb589bcd59fbf1cd8ac618595d66cda32509b055e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.033682 kubelet[2634]: E0706 23:37:33.033658 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e89cb9ebcb46044f9bbafeb589bcd59fbf1cd8ac618595d66cda32509b055e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f94dd8485-cxgxl" Jul 6 23:37:33.033738 kubelet[2634]: E0706 23:37:33.033685 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e89cb9ebcb46044f9bbafeb589bcd59fbf1cd8ac618595d66cda32509b055e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f94dd8485-cxgxl" Jul 6 23:37:33.033738 kubelet[2634]: E0706 23:37:33.033718 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f94dd8485-cxgxl_calico-system(552500ee-cc5f-44cf-95f3-d22a77a37375)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f94dd8485-cxgxl_calico-system(552500ee-cc5f-44cf-95f3-d22a77a37375)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5e89cb9ebcb46044f9bbafeb589bcd59fbf1cd8ac618595d66cda32509b055e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f94dd8485-cxgxl" podUID="552500ee-cc5f-44cf-95f3-d22a77a37375" Jul 6 23:37:33.272773 systemd[1]: Created slice 
kubepods-besteffort-podd990a250_f885_4c68_b48b_89990a4fd720.slice - libcontainer container kubepods-besteffort-podd990a250_f885_4c68_b48b_89990a4fd720.slice. Jul 6 23:37:33.276105 containerd[1520]: time="2025-07-06T23:37:33.276057356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fflkr,Uid:d990a250-f885-4c68-b48b-89990a4fd720,Namespace:calico-system,Attempt:0,}" Jul 6 23:37:33.313166 systemd[1]: run-netns-cni\x2d4a392986\x2dc520\x2d63f4\x2d4785\x2dc2014eaa3fd2.mount: Deactivated successfully. Jul 6 23:37:33.328973 containerd[1520]: time="2025-07-06T23:37:33.328890537Z" level=error msg="Failed to destroy network for sandbox \"4e05e72fb4efed9dfca8efa127b6e0e680fcb6be79afd43ebf66c67836258fb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.331162 systemd[1]: run-netns-cni\x2d18f488f4\x2dedc9\x2d5722\x2dcacf\x2d8206a2ebfa0a.mount: Deactivated successfully. 
Jul 6 23:37:33.332417 containerd[1520]: time="2025-07-06T23:37:33.331994107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fflkr,Uid:d990a250-f885-4c68-b48b-89990a4fd720,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e05e72fb4efed9dfca8efa127b6e0e680fcb6be79afd43ebf66c67836258fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.332641 kubelet[2634]: E0706 23:37:33.332608 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e05e72fb4efed9dfca8efa127b6e0e680fcb6be79afd43ebf66c67836258fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:37:33.332691 kubelet[2634]: E0706 23:37:33.332664 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e05e72fb4efed9dfca8efa127b6e0e680fcb6be79afd43ebf66c67836258fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fflkr" Jul 6 23:37:33.332723 kubelet[2634]: E0706 23:37:33.332682 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e05e72fb4efed9dfca8efa127b6e0e680fcb6be79afd43ebf66c67836258fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fflkr" Jul 6 
23:37:33.332764 kubelet[2634]: E0706 23:37:33.332730 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fflkr_calico-system(d990a250-f885-4c68-b48b-89990a4fd720)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fflkr_calico-system(d990a250-f885-4c68-b48b-89990a4fd720)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e05e72fb4efed9dfca8efa127b6e0e680fcb6be79afd43ebf66c67836258fb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fflkr" podUID="d990a250-f885-4c68-b48b-89990a4fd720" Jul 6 23:37:36.489840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772287698.mount: Deactivated successfully. Jul 6 23:37:36.677215 containerd[1520]: time="2025-07-06T23:37:36.677129301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:36.677895 containerd[1520]: time="2025-07-06T23:37:36.677842737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 6 23:37:36.682868 containerd[1520]: time="2025-07-06T23:37:36.682803981Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:36.684721 containerd[1520]: time="2025-07-06T23:37:36.684671219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:36.685323 containerd[1520]: time="2025-07-06T23:37:36.685114226Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.311445048s" Jul 6 23:37:36.685323 containerd[1520]: time="2025-07-06T23:37:36.685146589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 6 23:37:36.706692 containerd[1520]: time="2025-07-06T23:37:36.706643702Z" level=info msg="CreateContainer within sandbox \"712302d8c77c0f7983305886eb498324e0c13733332f91e55972bc6800dff190\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:37:36.713771 containerd[1520]: time="2025-07-06T23:37:36.713722730Z" level=info msg="Container 60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:36.730317 containerd[1520]: time="2025-07-06T23:37:36.730272480Z" level=info msg="CreateContainer within sandbox \"712302d8c77c0f7983305886eb498324e0c13733332f91e55972bc6800dff190\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75\"" Jul 6 23:37:36.730791 containerd[1520]: time="2025-07-06T23:37:36.730768612Z" level=info msg="StartContainer for \"60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75\"" Jul 6 23:37:36.732752 containerd[1520]: time="2025-07-06T23:37:36.732716818Z" level=info msg="connecting to shim 60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75" address="unix:///run/containerd/s/926438006c31733938a9532a85db33e5629db637f01137366a401bfc52df4c3e" protocol=ttrpc version=3 Jul 6 23:37:36.754113 systemd[1]: Started 
cri-containerd-60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75.scope - libcontainer container 60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75. Jul 6 23:37:36.795560 containerd[1520]: time="2025-07-06T23:37:36.795452850Z" level=info msg="StartContainer for \"60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75\" returns successfully" Jul 6 23:37:37.046501 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:37:37.046669 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 6 23:37:37.255704 kubelet[2634]: I0706 23:37:37.255662 2634 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/552500ee-cc5f-44cf-95f3-d22a77a37375-whisker-ca-bundle\") pod \"552500ee-cc5f-44cf-95f3-d22a77a37375\" (UID: \"552500ee-cc5f-44cf-95f3-d22a77a37375\") " Jul 6 23:37:37.256429 kubelet[2634]: I0706 23:37:37.256146 2634 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/552500ee-cc5f-44cf-95f3-d22a77a37375-whisker-backend-key-pair\") pod \"552500ee-cc5f-44cf-95f3-d22a77a37375\" (UID: \"552500ee-cc5f-44cf-95f3-d22a77a37375\") " Jul 6 23:37:37.256429 kubelet[2634]: I0706 23:37:37.256178 2634 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cpqx\" (UniqueName: \"kubernetes.io/projected/552500ee-cc5f-44cf-95f3-d22a77a37375-kube-api-access-5cpqx\") pod \"552500ee-cc5f-44cf-95f3-d22a77a37375\" (UID: \"552500ee-cc5f-44cf-95f3-d22a77a37375\") " Jul 6 23:37:37.262676 kubelet[2634]: I0706 23:37:37.262355 2634 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552500ee-cc5f-44cf-95f3-d22a77a37375-kube-api-access-5cpqx" (OuterVolumeSpecName: "kube-api-access-5cpqx") pod "552500ee-cc5f-44cf-95f3-d22a77a37375" (UID: 
"552500ee-cc5f-44cf-95f3-d22a77a37375"). InnerVolumeSpecName "kube-api-access-5cpqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 6 23:37:37.263875 kubelet[2634]: I0706 23:37:37.263831 2634 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/552500ee-cc5f-44cf-95f3-d22a77a37375-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "552500ee-cc5f-44cf-95f3-d22a77a37375" (UID: "552500ee-cc5f-44cf-95f3-d22a77a37375"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 6 23:37:37.267737 kubelet[2634]: I0706 23:37:37.267683 2634 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552500ee-cc5f-44cf-95f3-d22a77a37375-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "552500ee-cc5f-44cf-95f3-d22a77a37375" (UID: "552500ee-cc5f-44cf-95f3-d22a77a37375"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 6 23:37:37.356501 kubelet[2634]: I0706 23:37:37.356464 2634 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cpqx\" (UniqueName: \"kubernetes.io/projected/552500ee-cc5f-44cf-95f3-d22a77a37375-kube-api-access-5cpqx\") on node \"localhost\" DevicePath \"\""
Jul 6 23:37:37.356501 kubelet[2634]: I0706 23:37:37.356498 2634 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/552500ee-cc5f-44cf-95f3-d22a77a37375-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Jul 6 23:37:37.356501 kubelet[2634]: I0706 23:37:37.356509 2634 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/552500ee-cc5f-44cf-95f3-d22a77a37375-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jul 6 23:37:37.420365 systemd[1]: Removed slice kubepods-besteffort-pod552500ee_cc5f_44cf_95f3_d22a77a37375.slice - libcontainer container kubepods-besteffort-pod552500ee_cc5f_44cf_95f3_d22a77a37375.slice.
Jul 6 23:37:37.447408 kubelet[2634]: I0706 23:37:37.447343 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jj67s" podStartSLOduration=2.358711733 podStartE2EDuration="13.447325598s" podCreationTimestamp="2025-07-06 23:37:24 +0000 UTC" firstStartedPulling="2025-07-06 23:37:25.59734769 +0000 UTC m=+17.420746404" lastFinishedPulling="2025-07-06 23:37:36.685961555 +0000 UTC m=+28.509360269" observedRunningTime="2025-07-06 23:37:37.446861511 +0000 UTC m=+29.270260225" watchObservedRunningTime="2025-07-06 23:37:37.447325598 +0000 UTC m=+29.270724312"
Jul 6 23:37:37.490794 systemd[1]: var-lib-kubelet-pods-552500ee\x2dcc5f\x2d44cf\x2d95f3\x2dd22a77a37375-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jul 6 23:37:37.490892 systemd[1]: var-lib-kubelet-pods-552500ee\x2dcc5f\x2d44cf\x2d95f3\x2dd22a77a37375-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5cpqx.mount: Deactivated successfully.
Jul 6 23:37:37.526025 systemd[1]: Created slice kubepods-besteffort-pod5efefbae_fd0d_4281_92c2_babf10803b43.slice - libcontainer container kubepods-besteffort-pod5efefbae_fd0d_4281_92c2_babf10803b43.slice.
Jul 6 23:37:37.581510 containerd[1520]: time="2025-07-06T23:37:37.581425284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75\" id:\"fe7a8458e846f7bd933bf7f3613eb6b65d109dcfd7795a8326c6f869fd6ddedd\" pid:3731 exit_status:1 exited_at:{seconds:1751845057 nanos:581091330}"
Jul 6 23:37:37.658492 kubelet[2634]: I0706 23:37:37.658411 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzdk4\" (UniqueName: \"kubernetes.io/projected/5efefbae-fd0d-4281-92c2-babf10803b43-kube-api-access-qzdk4\") pod \"whisker-cbb44fcf9-r9lqr\" (UID: \"5efefbae-fd0d-4281-92c2-babf10803b43\") " pod="calico-system/whisker-cbb44fcf9-r9lqr"
Jul 6 23:37:37.658750 kubelet[2634]: I0706 23:37:37.658645 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5efefbae-fd0d-4281-92c2-babf10803b43-whisker-backend-key-pair\") pod \"whisker-cbb44fcf9-r9lqr\" (UID: \"5efefbae-fd0d-4281-92c2-babf10803b43\") " pod="calico-system/whisker-cbb44fcf9-r9lqr"
Jul 6 23:37:37.658750 kubelet[2634]: I0706 23:37:37.658689 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efefbae-fd0d-4281-92c2-babf10803b43-whisker-ca-bundle\") pod \"whisker-cbb44fcf9-r9lqr\" (UID: \"5efefbae-fd0d-4281-92c2-babf10803b43\") " pod="calico-system/whisker-cbb44fcf9-r9lqr"
Jul 6 23:37:37.683154 containerd[1520]: time="2025-07-06T23:37:37.683112152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75\" id:\"c67f48a58064e308f865396ca59807cbbffb589c15883b4b17fb6755f8030d70\" pid:3757 exit_status:1 exited_at:{seconds:1751845057 nanos:682734794}"
Jul 6 23:37:37.847458 containerd[1520]: time="2025-07-06T23:37:37.847123482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbb44fcf9-r9lqr,Uid:5efefbae-fd0d-4281-92c2-babf10803b43,Namespace:calico-system,Attempt:0,}"
Jul 6 23:37:38.162822 systemd-networkd[1428]: cali2bb7d8398cb: Link UP
Jul 6 23:37:38.164102 systemd-networkd[1428]: cali2bb7d8398cb: Gained carrier
Jul 6 23:37:38.185922 containerd[1520]: 2025-07-06 23:37:37.874 [INFO][3771] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 6 23:37:38.185922 containerd[1520]: 2025-07-06 23:37:37.941 [INFO][3771] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0 whisker-cbb44fcf9- calico-system 5efefbae-fd0d-4281-92c2-babf10803b43 875 0 2025-07-06 23:37:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cbb44fcf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-cbb44fcf9-r9lqr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2bb7d8398cb [] [] }} ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Namespace="calico-system" Pod="whisker-cbb44fcf9-r9lqr" WorkloadEndpoint="localhost-k8s-whisker--cbb44fcf9--r9lqr-"
Jul 6 23:37:38.185922 containerd[1520]: 2025-07-06 23:37:37.941 [INFO][3771] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Namespace="calico-system" Pod="whisker-cbb44fcf9-r9lqr" WorkloadEndpoint="localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0"
Jul 6 23:37:38.185922 containerd[1520]: 2025-07-06 23:37:38.061 [INFO][3787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee"
HandleID="k8s-pod-network.255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Workload="localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0"
Jul 6 23:37:38.186248 containerd[1520]: 2025-07-06 23:37:38.062 [INFO][3787] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" HandleID="k8s-pod-network.255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Workload="localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000185740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-cbb44fcf9-r9lqr", "timestamp":"2025-07-06 23:37:38.061821503 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:37:38.186248 containerd[1520]: 2025-07-06 23:37:38.062 [INFO][3787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:37:38.186248 containerd[1520]: 2025-07-06 23:37:38.062 [INFO][3787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:37:38.186248 containerd[1520]: 2025-07-06 23:37:38.062 [INFO][3787] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 6 23:37:38.186248 containerd[1520]: 2025-07-06 23:37:38.085 [INFO][3787] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" host="localhost"
Jul 6 23:37:38.186248 containerd[1520]: 2025-07-06 23:37:38.103 [INFO][3787] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 6 23:37:38.186248 containerd[1520]: 2025-07-06 23:37:38.109 [INFO][3787] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 6 23:37:38.186248 containerd[1520]: 2025-07-06 23:37:38.112 [INFO][3787] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 6 23:37:38.186248 containerd[1520]: 2025-07-06 23:37:38.124 [INFO][3787] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 6 23:37:38.186248 containerd[1520]: 2025-07-06 23:37:38.124 [INFO][3787] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" host="localhost"
Jul 6 23:37:38.186481 containerd[1520]: 2025-07-06 23:37:38.126 [INFO][3787] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee
Jul 6 23:37:38.186481 containerd[1520]: 2025-07-06 23:37:38.132 [INFO][3787] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" host="localhost"
Jul 6 23:37:38.186481 containerd[1520]: 2025-07-06 23:37:38.140 [INFO][3787] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" host="localhost"
Jul 6 23:37:38.186481 containerd[1520]: 2025-07-06 23:37:38.140 [INFO][3787] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" host="localhost"
Jul 6 23:37:38.186481 containerd[1520]: 2025-07-06 23:37:38.140 [INFO][3787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:37:38.186481 containerd[1520]: 2025-07-06 23:37:38.140 [INFO][3787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" HandleID="k8s-pod-network.255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Workload="localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0"
Jul 6 23:37:38.186612 containerd[1520]: 2025-07-06 23:37:38.143 [INFO][3771] cni-plugin/k8s.go 418: Populated endpoint ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Namespace="calico-system" Pod="whisker-cbb44fcf9-r9lqr" WorkloadEndpoint="localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0", GenerateName:"whisker-cbb44fcf9-", Namespace:"calico-system", SelfLink:"", UID:"5efefbae-fd0d-4281-92c2-babf10803b43", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cbb44fcf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil),
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-cbb44fcf9-r9lqr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2bb7d8398cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:37:38.186612 containerd[1520]: 2025-07-06 23:37:38.143 [INFO][3771] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Namespace="calico-system" Pod="whisker-cbb44fcf9-r9lqr" WorkloadEndpoint="localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0"
Jul 6 23:37:38.186683 containerd[1520]: 2025-07-06 23:37:38.146 [INFO][3771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bb7d8398cb ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Namespace="calico-system" Pod="whisker-cbb44fcf9-r9lqr" WorkloadEndpoint="localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0"
Jul 6 23:37:38.186683 containerd[1520]: 2025-07-06 23:37:38.164 [INFO][3771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Namespace="calico-system" Pod="whisker-cbb44fcf9-r9lqr" WorkloadEndpoint="localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0"
Jul 6 23:37:38.186727 containerd[1520]: 2025-07-06 23:37:38.165 [INFO][3771] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Namespace="calico-system" Pod="whisker-cbb44fcf9-r9lqr" WorkloadEndpoint="localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0", GenerateName:"whisker-cbb44fcf9-", Namespace:"calico-system", SelfLink:"", UID:"5efefbae-fd0d-4281-92c2-babf10803b43", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cbb44fcf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee", Pod:"whisker-cbb44fcf9-r9lqr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2bb7d8398cb", MAC:"aa:0e:bb:b9:cf:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:37:38.186774 containerd[1520]: 2025-07-06 23:37:38.183 [INFO][3771] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" Namespace="calico-system" Pod="whisker-cbb44fcf9-r9lqr" WorkloadEndpoint="localhost-k8s-whisker--cbb44fcf9--r9lqr-eth0"
Jul 6 23:37:38.297393 kubelet[2634]: I0706 23:37:38.297186 2634 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="552500ee-cc5f-44cf-95f3-d22a77a37375" path="/var/lib/kubelet/pods/552500ee-cc5f-44cf-95f3-d22a77a37375/volumes"
Jul 6 23:37:38.365697 containerd[1520]: time="2025-07-06T23:37:38.365641893Z" level=info msg="connecting to shim 255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee" address="unix:///run/containerd/s/e9daff528e7879b35196c1c5aa2afa95b06e5c4a28270affa98ea5759d0e44d5" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:37:38.397096 systemd[1]: Started cri-containerd-255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee.scope - libcontainer container 255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee.
Jul 6 23:37:38.409530 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:37:38.470925 containerd[1520]: time="2025-07-06T23:37:38.470224947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbb44fcf9-r9lqr,Uid:5efefbae-fd0d-4281-92c2-babf10803b43,Namespace:calico-system,Attempt:0,} returns sandbox id \"255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee\""
Jul 6 23:37:38.472279 containerd[1520]: time="2025-07-06T23:37:38.472240225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 6 23:37:38.641681 containerd[1520]: time="2025-07-06T23:37:38.641622033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75\" id:\"c102e1e8b5832d1371a09fa03a75f27d1d67ebdc08e6168f9e4348a69041645b\" pid:3858 exit_status:1 exited_at:{seconds:1751845058 nanos:640802473}"
Jul 6 23:37:39.488376 containerd[1520]: time="2025-07-06T23:37:39.488324756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75\" id:\"ac943e0ea850e0662085df445866ff268e97ef99ec62b8c2da17d7993d5e9dc3\" pid:3989 exit_status:1 exited_at:{seconds:1751845059 nanos:487967522}"
Jul 6 23:37:39.737955 containerd[1520]: time="2025-07-06T23:37:39.737878356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:39.743450 containerd[1520]: time="2025-07-06T23:37:39.741035015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614"
Jul 6 23:37:39.745274 containerd[1520]: time="2025-07-06T23:37:39.745232812Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:39.751623 containerd[1520]: time="2025-07-06T23:37:39.751552130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:39.752394 containerd[1520]: time="2025-07-06T23:37:39.752356846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.280074817s"
Jul 6 23:37:39.752450 containerd[1520]: time="2025-07-06T23:37:39.752393129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\""
Jul 6 23:37:39.755197 containerd[1520]: time="2025-07-06T23:37:39.755156990Z" level=info msg="CreateContainer within sandbox \"255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 6 23:37:39.806322 containerd[1520]: time="2025-07-06T23:37:39.806267624Z" level=info msg="Container 680fd1c6b3d2fc58c54e1bdbc87a8d4ca9f06761118e8e5796ec64b765e63209: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:37:39.856623 containerd[1520]: time="2025-07-06T23:37:39.856568581Z" level=info msg="CreateContainer within sandbox \"255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"680fd1c6b3d2fc58c54e1bdbc87a8d4ca9f06761118e8e5796ec64b765e63209\""
Jul 6 23:37:39.857182 containerd[1520]: time="2025-07-06T23:37:39.857158877Z" level=info msg="StartContainer for \"680fd1c6b3d2fc58c54e1bdbc87a8d4ca9f06761118e8e5796ec64b765e63209\""
Jul 6 23:37:39.859669 containerd[1520]: time="2025-07-06T23:37:39.859627670Z" level=info msg="connecting to shim 680fd1c6b3d2fc58c54e1bdbc87a8d4ca9f06761118e8e5796ec64b765e63209" address="unix:///run/containerd/s/e9daff528e7879b35196c1c5aa2afa95b06e5c4a28270affa98ea5759d0e44d5" protocol=ttrpc version=3
Jul 6 23:37:39.860343 systemd-networkd[1428]: cali2bb7d8398cb: Gained IPv6LL
Jul 6 23:37:39.883116 systemd[1]: Started cri-containerd-680fd1c6b3d2fc58c54e1bdbc87a8d4ca9f06761118e8e5796ec64b765e63209.scope - libcontainer container 680fd1c6b3d2fc58c54e1bdbc87a8d4ca9f06761118e8e5796ec64b765e63209.
Jul 6 23:37:39.919048 containerd[1520]: time="2025-07-06T23:37:39.918983484Z" level=info msg="StartContainer for \"680fd1c6b3d2fc58c54e1bdbc87a8d4ca9f06761118e8e5796ec64b765e63209\" returns successfully"
Jul 6 23:37:39.920245 containerd[1520]: time="2025-07-06T23:37:39.920218281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 6 23:37:41.355260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678112700.mount: Deactivated successfully.
Jul 6 23:37:41.405602 containerd[1520]: time="2025-07-06T23:37:41.404830147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:41.405602 containerd[1520]: time="2025-07-06T23:37:41.405422239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581"
Jul 6 23:37:41.406271 containerd[1520]: time="2025-07-06T23:37:41.406237511Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:41.408898 containerd[1520]: time="2025-07-06T23:37:41.408860822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:41.409444 containerd[1520]: time="2025-07-06T23:37:41.409406991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.489149627s"
Jul 6 23:37:41.409496 containerd[1520]: time="2025-07-06T23:37:41.409442314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\""
Jul 6 23:37:41.423551 containerd[1520]: time="2025-07-06T23:37:41.423490473Z" level=info msg="CreateContainer within sandbox \"255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 6 23:37:41.432249 containerd[1520]: time="2025-07-06T23:37:41.432034147Z" level=info msg="Container 9c5fdd41346b96b2dd1643b0c7e7e1f6905e11f062bd07f36e350b8545078449: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:37:41.442829 containerd[1520]: time="2025-07-06T23:37:41.442786456Z" level=info msg="CreateContainer within sandbox \"255da158107187a226f2829379c53ef918fe70c6afa1f4521657f67fc67bfaee\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9c5fdd41346b96b2dd1643b0c7e7e1f6905e11f062bd07f36e350b8545078449\""
Jul 6 23:37:41.444689 containerd[1520]: time="2025-07-06T23:37:41.443447395Z" level=info msg="StartContainer for \"9c5fdd41346b96b2dd1643b0c7e7e1f6905e11f062bd07f36e350b8545078449\""
Jul 6 23:37:41.444689 containerd[1520]: time="2025-07-06T23:37:41.444566854Z" level=info msg="connecting to shim 9c5fdd41346b96b2dd1643b0c7e7e1f6905e11f062bd07f36e350b8545078449" address="unix:///run/containerd/s/e9daff528e7879b35196c1c5aa2afa95b06e5c4a28270affa98ea5759d0e44d5" protocol=ttrpc version=3
Jul 6 23:37:41.473122 systemd[1]: Started cri-containerd-9c5fdd41346b96b2dd1643b0c7e7e1f6905e11f062bd07f36e350b8545078449.scope - libcontainer container 9c5fdd41346b96b2dd1643b0c7e7e1f6905e11f062bd07f36e350b8545078449.
Jul 6 23:37:41.552734 containerd[1520]: time="2025-07-06T23:37:41.552528341Z" level=info msg="StartContainer for \"9c5fdd41346b96b2dd1643b0c7e7e1f6905e11f062bd07f36e350b8545078449\" returns successfully"
Jul 6 23:37:42.107521 kubelet[2634]: I0706 23:37:42.107473 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:37:42.438409 kubelet[2634]: I0706 23:37:42.438259 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-cbb44fcf9-r9lqr" podStartSLOduration=2.4892633220000002 podStartE2EDuration="5.438227335s" podCreationTimestamp="2025-07-06 23:37:37 +0000 UTC" firstStartedPulling="2025-07-06 23:37:38.471648887 +0000 UTC m=+30.295047601" lastFinishedPulling="2025-07-06 23:37:41.4206129 +0000 UTC m=+33.244011614" observedRunningTime="2025-07-06 23:37:42.437261092 +0000 UTC m=+34.260659846" watchObservedRunningTime="2025-07-06 23:37:42.438227335 +0000 UTC m=+34.261626049"
Jul 6 23:37:43.037707 systemd-networkd[1428]: vxlan.calico: Link UP
Jul 6 23:37:43.037714 systemd-networkd[1428]: vxlan.calico: Gained carrier
Jul 6 23:37:43.268988 containerd[1520]: time="2025-07-06T23:37:43.268943387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786b9888cb-8f2tc,Uid:1323ebd3-6c31-4106-8489-61ffc77fae0f,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:37:43.432690 systemd-networkd[1428]: cali6f445ea4ecc: Link UP
Jul 6 23:37:43.434104 systemd-networkd[1428]: cali6f445ea4ecc: Gained carrier
Jul 6 23:37:43.447249 containerd[1520]: 2025-07-06 23:37:43.356 [INFO][4298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0 calico-apiserver-786b9888cb- calico-apiserver 1323ebd3-6c31-4106-8489-61ffc77fae0f 813 0 2025-07-06 23:37:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:786b9888cb
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-786b9888cb-8f2tc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6f445ea4ecc [] [] }} ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-8f2tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-"
Jul 6 23:37:43.447249 containerd[1520]: 2025-07-06 23:37:43.356 [INFO][4298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-8f2tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0"
Jul 6 23:37:43.447249 containerd[1520]: 2025-07-06 23:37:43.390 [INFO][4313] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" HandleID="k8s-pod-network.31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Workload="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0"
Jul 6 23:37:43.447491 containerd[1520]: 2025-07-06 23:37:43.390 [INFO][4313] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" HandleID="k8s-pod-network.31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Workload="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-786b9888cb-8f2tc", "timestamp":"2025-07-06 23:37:43.390286102 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:37:43.447491 containerd[1520]: 2025-07-06 23:37:43.390 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:37:43.447491 containerd[1520]: 2025-07-06 23:37:43.390 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:37:43.447491 containerd[1520]: 2025-07-06 23:37:43.390 [INFO][4313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 6 23:37:43.447491 containerd[1520]: 2025-07-06 23:37:43.399 [INFO][4313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" host="localhost"
Jul 6 23:37:43.447491 containerd[1520]: 2025-07-06 23:37:43.406 [INFO][4313] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 6 23:37:43.447491 containerd[1520]: 2025-07-06 23:37:43.411 [INFO][4313] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 6 23:37:43.447491 containerd[1520]: 2025-07-06 23:37:43.413 [INFO][4313] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 6 23:37:43.447491 containerd[1520]: 2025-07-06 23:37:43.415 [INFO][4313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 6 23:37:43.447491 containerd[1520]: 2025-07-06 23:37:43.415 [INFO][4313] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" host="localhost"
Jul 6 23:37:43.448235 containerd[1520]: 2025-07-06 23:37:43.416 [INFO][4313] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f
Jul 6 23:37:43.448235 containerd[1520]: 2025-07-06 23:37:43.420 [INFO][4313] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" host="localhost"
Jul 6 23:37:43.448235 containerd[1520]: 2025-07-06 23:37:43.425 [INFO][4313] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" host="localhost"
Jul 6 23:37:43.448235 containerd[1520]: 2025-07-06 23:37:43.425 [INFO][4313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" host="localhost"
Jul 6 23:37:43.448235 containerd[1520]: 2025-07-06 23:37:43.425 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:37:43.448235 containerd[1520]: 2025-07-06 23:37:43.425 [INFO][4313] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" HandleID="k8s-pod-network.31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Workload="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0"
Jul 6 23:37:43.448657 containerd[1520]: 2025-07-06 23:37:43.428 [INFO][4298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-8f2tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0", GenerateName:"calico-apiserver-786b9888cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1323ebd3-6c31-4106-8489-61ffc77fae0f", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786b9888cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-786b9888cb-8f2tc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f445ea4ecc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:37:43.448721 containerd[1520]: 2025-07-06 23:37:43.429 [INFO][4298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-8f2tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0"
Jul 6 23:37:43.448721 containerd[1520]: 2025-07-06 23:37:43.429 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f445ea4ecc ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-8f2tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0"
Jul 6 23:37:43.448721 containerd[1520]: 2025-07-06 23:37:43.434 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-8f2tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0"
Jul 6 23:37:43.449333 containerd[1520]: 2025-07-06 23:37:43.435 [INFO][4298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-8f2tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0", GenerateName:"calico-apiserver-786b9888cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1323ebd3-6c31-4106-8489-61ffc77fae0f", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786b9888cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f", Pod:"calico-apiserver-786b9888cb-8f2tc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver",
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f445ea4ecc", MAC:"0a:34:6e:73:eb:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:43.450011 containerd[1520]: 2025-07-06 23:37:43.443 [INFO][4298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-8f2tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--8f2tc-eth0" Jul 6 23:37:43.470594 containerd[1520]: time="2025-07-06T23:37:43.470542219Z" level=info msg="connecting to shim 31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f" address="unix:///run/containerd/s/f7913017b9df22511fc5f5af13c9c32f8a5771a0ff3be293ef3c416d7753f602" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:37:43.497115 systemd[1]: Started cri-containerd-31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f.scope - libcontainer container 31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f. 
Jul 6 23:37:43.508941 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:37:43.528218 containerd[1520]: time="2025-07-06T23:37:43.528174025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786b9888cb-8f2tc,Uid:1323ebd3-6c31-4106-8489-61ffc77fae0f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f\"" Jul 6 23:37:43.529758 containerd[1520]: time="2025-07-06T23:37:43.529729074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:37:44.595084 systemd-networkd[1428]: vxlan.calico: Gained IPv6LL Jul 6 23:37:44.787139 systemd-networkd[1428]: cali6f445ea4ecc: Gained IPv6LL Jul 6 23:37:45.075749 containerd[1520]: time="2025-07-06T23:37:45.075692803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:45.076306 containerd[1520]: time="2025-07-06T23:37:45.076274928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 6 23:37:45.077060 containerd[1520]: time="2025-07-06T23:37:45.077029987Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:45.078970 containerd[1520]: time="2025-07-06T23:37:45.078939856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:45.079602 containerd[1520]: time="2025-07-06T23:37:45.079566624Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.549808788s" Jul 6 23:37:45.079602 containerd[1520]: time="2025-07-06T23:37:45.079601307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:37:45.083717 containerd[1520]: time="2025-07-06T23:37:45.083685825Z" level=info msg="CreateContainer within sandbox \"31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:37:45.091951 containerd[1520]: time="2025-07-06T23:37:45.090267297Z" level=info msg="Container df0d85d45b36272dba0dc4c9a5ac5b660841e8f3abfddc777d649b260e5683aa: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:45.097522 containerd[1520]: time="2025-07-06T23:37:45.097472818Z" level=info msg="CreateContainer within sandbox \"31846eb9c38e891026d369f3f31a1d4c09517fa565c3317b029c502f6477102f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"df0d85d45b36272dba0dc4c9a5ac5b660841e8f3abfddc777d649b260e5683aa\"" Jul 6 23:37:45.098202 containerd[1520]: time="2025-07-06T23:37:45.098089786Z" level=info msg="StartContainer for \"df0d85d45b36272dba0dc4c9a5ac5b660841e8f3abfddc777d649b260e5683aa\"" Jul 6 23:37:45.099614 containerd[1520]: time="2025-07-06T23:37:45.099583422Z" level=info msg="connecting to shim df0d85d45b36272dba0dc4c9a5ac5b660841e8f3abfddc777d649b260e5683aa" address="unix:///run/containerd/s/f7913017b9df22511fc5f5af13c9c32f8a5771a0ff3be293ef3c416d7753f602" protocol=ttrpc version=3 Jul 6 23:37:45.123125 systemd[1]: Started cri-containerd-df0d85d45b36272dba0dc4c9a5ac5b660841e8f3abfddc777d649b260e5683aa.scope - libcontainer container df0d85d45b36272dba0dc4c9a5ac5b660841e8f3abfddc777d649b260e5683aa. 
Jul 6 23:37:45.171009 containerd[1520]: time="2025-07-06T23:37:45.170962097Z" level=info msg="StartContainer for \"df0d85d45b36272dba0dc4c9a5ac5b660841e8f3abfddc777d649b260e5683aa\" returns successfully" Jul 6 23:37:45.268795 containerd[1520]: time="2025-07-06T23:37:45.268744226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk6h2,Uid:8d06d505-6876-4335-b79e-3060be27b430,Namespace:kube-system,Attempt:0,}" Jul 6 23:37:45.269160 containerd[1520]: time="2025-07-06T23:37:45.268744306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fflkr,Uid:d990a250-f885-4c68-b48b-89990a4fd720,Namespace:calico-system,Attempt:0,}" Jul 6 23:37:45.272456 containerd[1520]: time="2025-07-06T23:37:45.272399150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786b9888cb-v8zz6,Uid:b41952be-a6bf-4a79-b50e-e43b4dd09769,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:37:45.454943 kubelet[2634]: I0706 23:37:45.454783 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-786b9888cb-8f2tc" podStartSLOduration=19.90209993 podStartE2EDuration="21.454750341s" podCreationTimestamp="2025-07-06 23:37:24 +0000 UTC" firstStartedPulling="2025-07-06 23:37:43.529420168 +0000 UTC m=+35.352818882" lastFinishedPulling="2025-07-06 23:37:45.082070619 +0000 UTC m=+36.905469293" observedRunningTime="2025-07-06 23:37:45.453511164 +0000 UTC m=+37.276909958" watchObservedRunningTime="2025-07-06 23:37:45.454750341 +0000 UTC m=+37.278149095" Jul 6 23:37:45.475984 systemd-networkd[1428]: cali097e4aced39: Link UP Jul 6 23:37:45.476307 systemd-networkd[1428]: cali097e4aced39: Gained carrier Jul 6 23:37:45.493855 containerd[1520]: 2025-07-06 23:37:45.379 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fflkr-eth0 csi-node-driver- calico-system 
d990a250-f885-4c68-b48b-89990a4fd720 677 0 2025-07-06 23:37:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fflkr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali097e4aced39 [] [] }} ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Namespace="calico-system" Pod="csi-node-driver-fflkr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fflkr-" Jul 6 23:37:45.493855 containerd[1520]: 2025-07-06 23:37:45.379 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Namespace="calico-system" Pod="csi-node-driver-fflkr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fflkr-eth0" Jul 6 23:37:45.493855 containerd[1520]: 2025-07-06 23:37:45.419 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" HandleID="k8s-pod-network.89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Workload="localhost-k8s-csi--node--driver--fflkr-eth0" Jul 6 23:37:45.494170 containerd[1520]: 2025-07-06 23:37:45.420 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" HandleID="k8s-pod-network.89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Workload="localhost-k8s-csi--node--driver--fflkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000342130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fflkr", "timestamp":"2025-07-06 23:37:45.41978502 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:37:45.494170 containerd[1520]: 2025-07-06 23:37:45.420 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:37:45.494170 containerd[1520]: 2025-07-06 23:37:45.420 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:37:45.494170 containerd[1520]: 2025-07-06 23:37:45.420 [INFO][4476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:37:45.494170 containerd[1520]: 2025-07-06 23:37:45.433 [INFO][4476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" host="localhost" Jul 6 23:37:45.494170 containerd[1520]: 2025-07-06 23:37:45.440 [INFO][4476] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:37:45.494170 containerd[1520]: 2025-07-06 23:37:45.447 [INFO][4476] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:37:45.494170 containerd[1520]: 2025-07-06 23:37:45.450 [INFO][4476] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:45.494170 containerd[1520]: 2025-07-06 23:37:45.453 [INFO][4476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:45.494170 containerd[1520]: 2025-07-06 23:37:45.453 [INFO][4476] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" host="localhost" Jul 6 23:37:45.494377 containerd[1520]: 2025-07-06 23:37:45.456 [INFO][4476] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917 
Jul 6 23:37:45.494377 containerd[1520]: 2025-07-06 23:37:45.461 [INFO][4476] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" host="localhost" Jul 6 23:37:45.494377 containerd[1520]: 2025-07-06 23:37:45.467 [INFO][4476] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" host="localhost" Jul 6 23:37:45.494377 containerd[1520]: 2025-07-06 23:37:45.468 [INFO][4476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" host="localhost" Jul 6 23:37:45.494377 containerd[1520]: 2025-07-06 23:37:45.468 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:37:45.494377 containerd[1520]: 2025-07-06 23:37:45.468 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" HandleID="k8s-pod-network.89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Workload="localhost-k8s-csi--node--driver--fflkr-eth0" Jul 6 23:37:45.494501 containerd[1520]: 2025-07-06 23:37:45.472 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Namespace="calico-system" Pod="csi-node-driver-fflkr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fflkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fflkr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d990a250-f885-4c68-b48b-89990a4fd720", ResourceVersion:"677", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fflkr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali097e4aced39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:45.494549 containerd[1520]: 2025-07-06 23:37:45.472 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Namespace="calico-system" Pod="csi-node-driver-fflkr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fflkr-eth0" Jul 6 23:37:45.494549 containerd[1520]: 2025-07-06 23:37:45.472 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali097e4aced39 ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Namespace="calico-system" Pod="csi-node-driver-fflkr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fflkr-eth0" Jul 6 23:37:45.494549 containerd[1520]: 2025-07-06 23:37:45.477 [INFO][4446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Namespace="calico-system" Pod="csi-node-driver-fflkr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fflkr-eth0" Jul 6 23:37:45.494615 containerd[1520]: 2025-07-06 23:37:45.478 [INFO][4446] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Namespace="calico-system" Pod="csi-node-driver-fflkr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fflkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fflkr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d990a250-f885-4c68-b48b-89990a4fd720", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917", Pod:"csi-node-driver-fflkr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali097e4aced39", MAC:"06:08:e6:00:7a:ac", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:45.494661 containerd[1520]: 2025-07-06 23:37:45.489 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" Namespace="calico-system" Pod="csi-node-driver-fflkr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fflkr-eth0" Jul 6 23:37:45.523211 containerd[1520]: time="2025-07-06T23:37:45.523139663Z" level=info msg="connecting to shim 89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917" address="unix:///run/containerd/s/f2724c6d622fef41d4e509fd3979d7279e07072ab4c3705b53a3b911250dda4c" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:37:45.548086 systemd[1]: Started cri-containerd-89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917.scope - libcontainer container 89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917. Jul 6 23:37:45.564457 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:37:45.584290 systemd-networkd[1428]: cali5cb5767aa29: Link UP Jul 6 23:37:45.584421 systemd-networkd[1428]: cali5cb5767aa29: Gained carrier Jul 6 23:37:45.588935 containerd[1520]: time="2025-07-06T23:37:45.588884699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fflkr,Uid:d990a250-f885-4c68-b48b-89990a4fd720,Namespace:calico-system,Attempt:0,} returns sandbox id \"89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917\"" Jul 6 23:37:45.590926 containerd[1520]: time="2025-07-06T23:37:45.590864293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:37:45.602246 containerd[1520]: 2025-07-06 23:37:45.377 [INFO][4432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0 
calico-apiserver-786b9888cb- calico-apiserver b41952be-a6bf-4a79-b50e-e43b4dd09769 816 0 2025-07-06 23:37:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:786b9888cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-786b9888cb-v8zz6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5cb5767aa29 [] [] }} ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-v8zz6" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-" Jul 6 23:37:45.602246 containerd[1520]: 2025-07-06 23:37:45.377 [INFO][4432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-v8zz6" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0" Jul 6 23:37:45.602246 containerd[1520]: 2025-07-06 23:37:45.426 [INFO][4468] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" HandleID="k8s-pod-network.a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Workload="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0" Jul 6 23:37:45.602421 containerd[1520]: 2025-07-06 23:37:45.426 [INFO][4468] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" HandleID="k8s-pod-network.a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Workload="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004b74a0), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"localhost", "pod":"calico-apiserver-786b9888cb-v8zz6", "timestamp":"2025-07-06 23:37:45.426016785 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:37:45.602421 containerd[1520]: 2025-07-06 23:37:45.426 [INFO][4468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:37:45.602421 containerd[1520]: 2025-07-06 23:37:45.468 [INFO][4468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:37:45.602421 containerd[1520]: 2025-07-06 23:37:45.468 [INFO][4468] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:37:45.602421 containerd[1520]: 2025-07-06 23:37:45.534 [INFO][4468] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" host="localhost" Jul 6 23:37:45.602421 containerd[1520]: 2025-07-06 23:37:45.543 [INFO][4468] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:37:45.602421 containerd[1520]: 2025-07-06 23:37:45.555 [INFO][4468] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:37:45.602421 containerd[1520]: 2025-07-06 23:37:45.558 [INFO][4468] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:45.602421 containerd[1520]: 2025-07-06 23:37:45.560 [INFO][4468] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:45.602421 containerd[1520]: 2025-07-06 23:37:45.561 [INFO][4468] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" host="localhost" Jul 6 23:37:45.602640 containerd[1520]: 2025-07-06 23:37:45.563 [INFO][4468] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8 Jul 6 23:37:45.602640 containerd[1520]: 2025-07-06 23:37:45.568 [INFO][4468] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" host="localhost" Jul 6 23:37:45.602640 containerd[1520]: 2025-07-06 23:37:45.576 [INFO][4468] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" host="localhost" Jul 6 23:37:45.602640 containerd[1520]: 2025-07-06 23:37:45.576 [INFO][4468] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" host="localhost" Jul 6 23:37:45.602640 containerd[1520]: 2025-07-06 23:37:45.576 [INFO][4468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:37:45.602640 containerd[1520]: 2025-07-06 23:37:45.576 [INFO][4468] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" HandleID="k8s-pod-network.a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Workload="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0" Jul 6 23:37:45.602751 containerd[1520]: 2025-07-06 23:37:45.580 [INFO][4432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-v8zz6" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0", GenerateName:"calico-apiserver-786b9888cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b41952be-a6bf-4a79-b50e-e43b4dd09769", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786b9888cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-786b9888cb-v8zz6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5cb5767aa29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:45.602797 containerd[1520]: 2025-07-06 23:37:45.580 [INFO][4432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-v8zz6" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0" Jul 6 23:37:45.602797 containerd[1520]: 2025-07-06 23:37:45.580 [INFO][4432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cb5767aa29 ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-v8zz6" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0" Jul 6 23:37:45.602797 containerd[1520]: 2025-07-06 23:37:45.583 [INFO][4432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-v8zz6" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0" Jul 6 23:37:45.602866 containerd[1520]: 2025-07-06 23:37:45.585 [INFO][4432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-v8zz6" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0", GenerateName:"calico-apiserver-786b9888cb-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"b41952be-a6bf-4a79-b50e-e43b4dd09769", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786b9888cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8", Pod:"calico-apiserver-786b9888cb-v8zz6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5cb5767aa29", MAC:"e2:6a:ab:50:dc:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:45.602946 containerd[1520]: 2025-07-06 23:37:45.597 [INFO][4432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" Namespace="calico-apiserver" Pod="calico-apiserver-786b9888cb-v8zz6" WorkloadEndpoint="localhost-k8s-calico--apiserver--786b9888cb--v8zz6-eth0" Jul 6 23:37:45.624555 containerd[1520]: time="2025-07-06T23:37:45.624473549Z" level=info msg="connecting to shim a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8" address="unix:///run/containerd/s/bca5a8ec9bbe10f9eb0c34169f1a1cb2df79b7a0ecbdb347914836dcd297823c" namespace=k8s.io protocol=ttrpc 
version=3 Jul 6 23:37:45.681107 systemd-networkd[1428]: cali15578dc1af8: Link UP Jul 6 23:37:45.681643 systemd-networkd[1428]: cali15578dc1af8: Gained carrier Jul 6 23:37:45.695581 containerd[1520]: 2025-07-06 23:37:45.391 [INFO][4431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0 coredns-7c65d6cfc9- kube-system 8d06d505-6876-4335-b79e-3060be27b430 815 0 2025-07-06 23:37:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-bk6h2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali15578dc1af8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk6h2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bk6h2-" Jul 6 23:37:45.695581 containerd[1520]: 2025-07-06 23:37:45.391 [INFO][4431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk6h2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0" Jul 6 23:37:45.695581 containerd[1520]: 2025-07-06 23:37:45.436 [INFO][4482] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" HandleID="k8s-pod-network.0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Workload="localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0" Jul 6 23:37:45.695815 containerd[1520]: 2025-07-06 23:37:45.436 [INFO][4482] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" 
HandleID="k8s-pod-network.0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Workload="localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011a290), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-bk6h2", "timestamp":"2025-07-06 23:37:45.435984521 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:37:45.695815 containerd[1520]: 2025-07-06 23:37:45.436 [INFO][4482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:37:45.695815 containerd[1520]: 2025-07-06 23:37:45.576 [INFO][4482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:37:45.695815 containerd[1520]: 2025-07-06 23:37:45.576 [INFO][4482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:37:45.695815 containerd[1520]: 2025-07-06 23:37:45.633 [INFO][4482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" host="localhost" Jul 6 23:37:45.695815 containerd[1520]: 2025-07-06 23:37:45.647 [INFO][4482] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:37:45.695815 containerd[1520]: 2025-07-06 23:37:45.654 [INFO][4482] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:37:45.695815 containerd[1520]: 2025-07-06 23:37:45.657 [INFO][4482] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:45.695815 containerd[1520]: 2025-07-06 23:37:45.660 [INFO][4482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:45.695815 containerd[1520]: 2025-07-06 23:37:45.660 [INFO][4482] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" host="localhost" Jul 6 23:37:45.696733 containerd[1520]: 2025-07-06 23:37:45.662 [INFO][4482] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88 Jul 6 23:37:45.696733 containerd[1520]: 2025-07-06 23:37:45.667 [INFO][4482] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" host="localhost" Jul 6 23:37:45.696733 containerd[1520]: 2025-07-06 23:37:45.674 [INFO][4482] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" host="localhost" Jul 6 23:37:45.696733 containerd[1520]: 2025-07-06 23:37:45.674 [INFO][4482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" host="localhost" Jul 6 23:37:45.696733 containerd[1520]: 2025-07-06 23:37:45.675 [INFO][4482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:37:45.696733 containerd[1520]: 2025-07-06 23:37:45.675 [INFO][4482] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" HandleID="k8s-pod-network.0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Workload="localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0" Jul 6 23:37:45.696840 containerd[1520]: 2025-07-06 23:37:45.678 [INFO][4431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk6h2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8d06d505-6876-4335-b79e-3060be27b430", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-bk6h2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali15578dc1af8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:45.697090 containerd[1520]: 2025-07-06 23:37:45.678 [INFO][4431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk6h2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0" Jul 6 23:37:45.697090 containerd[1520]: 2025-07-06 23:37:45.678 [INFO][4431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15578dc1af8 ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk6h2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0" Jul 6 23:37:45.697090 containerd[1520]: 2025-07-06 23:37:45.680 [INFO][4431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk6h2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0" Jul 6 23:37:45.697208 containerd[1520]: 2025-07-06 23:37:45.680 [INFO][4431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk6h2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8d06d505-6876-4335-b79e-3060be27b430", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88", Pod:"coredns-7c65d6cfc9-bk6h2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali15578dc1af8", MAC:"3e:bb:1b:22:b8:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:45.697208 containerd[1520]: 2025-07-06 23:37:45.692 [INFO][4431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk6h2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bk6h2-eth0" Jul 6 23:37:45.723500 containerd[1520]: time="2025-07-06T23:37:45.723375285Z" level=info msg="connecting to shim 0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88" address="unix:///run/containerd/s/0d69b0cc4c7099e4074e695570f197dea7ea33c0e2dd2125df25a4d4822ea4ff" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:37:45.724995 systemd[1]: Started cri-containerd-a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8.scope - libcontainer container a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8. Jul 6 23:37:45.763180 systemd[1]: Started cri-containerd-0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88.scope - libcontainer container 0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88. Jul 6 23:37:45.770130 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:37:45.778290 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:37:45.811164 containerd[1520]: time="2025-07-06T23:37:45.811040787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk6h2,Uid:8d06d505-6876-4335-b79e-3060be27b430,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88\"" Jul 6 23:37:45.815178 containerd[1520]: time="2025-07-06T23:37:45.815139106Z" level=info msg="CreateContainer within sandbox \"0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:37:45.826263 containerd[1520]: time="2025-07-06T23:37:45.825974349Z" level=info msg="Container c941edc619e30b2557a12969d69213072703a771d205b33da56b4e4564450024: CDI 
devices from CRI Config.CDIDevices: []" Jul 6 23:37:45.827564 containerd[1520]: time="2025-07-06T23:37:45.827538191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786b9888cb-v8zz6,Uid:b41952be-a6bf-4a79-b50e-e43b4dd09769,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8\"" Jul 6 23:37:45.831475 containerd[1520]: time="2025-07-06T23:37:45.831194235Z" level=info msg="CreateContainer within sandbox \"a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:37:45.835161 containerd[1520]: time="2025-07-06T23:37:45.835124861Z" level=info msg="CreateContainer within sandbox \"0e0ade23f1881913e2b7b836d59c1030afaa6ce110d1f363ef45d6ecad460e88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c941edc619e30b2557a12969d69213072703a771d205b33da56b4e4564450024\"" Jul 6 23:37:45.837307 containerd[1520]: time="2025-07-06T23:37:45.837159300Z" level=info msg="StartContainer for \"c941edc619e30b2557a12969d69213072703a771d205b33da56b4e4564450024\"" Jul 6 23:37:45.839198 containerd[1520]: time="2025-07-06T23:37:45.839167896Z" level=info msg="connecting to shim c941edc619e30b2557a12969d69213072703a771d205b33da56b4e4564450024" address="unix:///run/containerd/s/0d69b0cc4c7099e4074e695570f197dea7ea33c0e2dd2125df25a4d4822ea4ff" protocol=ttrpc version=3 Jul 6 23:37:45.840368 containerd[1520]: time="2025-07-06T23:37:45.840329146Z" level=info msg="Container 2280580843688d51bade4b01d6f49f6696dfb06f5e116d2b4f00c59f2437f64c: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:45.847596 containerd[1520]: time="2025-07-06T23:37:45.847487143Z" level=info msg="CreateContainer within sandbox \"a699216559fd38d978d6c739be71abc33fd641866f90b2ce58c1bba8df34b0f8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"2280580843688d51bade4b01d6f49f6696dfb06f5e116d2b4f00c59f2437f64c\"" Jul 6 23:37:45.848668 containerd[1520]: time="2025-07-06T23:37:45.848625152Z" level=info msg="StartContainer for \"2280580843688d51bade4b01d6f49f6696dfb06f5e116d2b4f00c59f2437f64c\"" Jul 6 23:37:45.850111 containerd[1520]: time="2025-07-06T23:37:45.850023661Z" level=info msg="connecting to shim 2280580843688d51bade4b01d6f49f6696dfb06f5e116d2b4f00c59f2437f64c" address="unix:///run/containerd/s/bca5a8ec9bbe10f9eb0c34169f1a1cb2df79b7a0ecbdb347914836dcd297823c" protocol=ttrpc version=3 Jul 6 23:37:45.879099 systemd[1]: Started cri-containerd-2280580843688d51bade4b01d6f49f6696dfb06f5e116d2b4f00c59f2437f64c.scope - libcontainer container 2280580843688d51bade4b01d6f49f6696dfb06f5e116d2b4f00c59f2437f64c. Jul 6 23:37:45.880169 systemd[1]: Started cri-containerd-c941edc619e30b2557a12969d69213072703a771d205b33da56b4e4564450024.scope - libcontainer container c941edc619e30b2557a12969d69213072703a771d205b33da56b4e4564450024. Jul 6 23:37:45.925506 containerd[1520]: time="2025-07-06T23:37:45.925387845Z" level=info msg="StartContainer for \"c941edc619e30b2557a12969d69213072703a771d205b33da56b4e4564450024\" returns successfully" Jul 6 23:37:45.952305 containerd[1520]: time="2025-07-06T23:37:45.952268897Z" level=info msg="StartContainer for \"2280580843688d51bade4b01d6f49f6696dfb06f5e116d2b4f00c59f2437f64c\" returns successfully" Jul 6 23:37:46.271001 containerd[1520]: time="2025-07-06T23:37:46.270323371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9ngv9,Uid:64788c1f-04c1-4e4c-8486-e17aa8abc6f7,Namespace:kube-system,Attempt:0,}" Jul 6 23:37:46.378193 systemd-networkd[1428]: cali3a96987467f: Link UP Jul 6 23:37:46.378719 systemd-networkd[1428]: cali3a96987467f: Gained carrier Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.309 [INFO][4732] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0 coredns-7c65d6cfc9- kube-system 64788c1f-04c1-4e4c-8486-e17aa8abc6f7 810 0 2025-07-06 23:37:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-9ngv9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3a96987467f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9ngv9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9ngv9-" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.310 [INFO][4732] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9ngv9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.336 [INFO][4743] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" HandleID="k8s-pod-network.5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Workload="localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.336 [INFO][4743] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" HandleID="k8s-pod-network.5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Workload="localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400052a3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-9ngv9", "timestamp":"2025-07-06 23:37:46.336200552 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.336 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.336 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.336 [INFO][4743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.346 [INFO][4743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" host="localhost" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.350 [INFO][4743] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.354 [INFO][4743] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.356 [INFO][4743] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.358 [INFO][4743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.358 [INFO][4743] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" host="localhost" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.360 [INFO][4743] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882 Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.364 [INFO][4743] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" host="localhost" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.373 [INFO][4743] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" host="localhost" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.373 [INFO][4743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" host="localhost" Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.373 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:37:46.397749 containerd[1520]: 2025-07-06 23:37:46.373 [INFO][4743] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" HandleID="k8s-pod-network.5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Workload="localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0" Jul 6 23:37:46.398556 containerd[1520]: 2025-07-06 23:37:46.375 [INFO][4732] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9ngv9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"64788c1f-04c1-4e4c-8486-e17aa8abc6f7", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-9ngv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a96987467f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:46.398556 containerd[1520]: 2025-07-06 23:37:46.375 [INFO][4732] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9ngv9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0" Jul 6 23:37:46.398556 containerd[1520]: 2025-07-06 23:37:46.375 [INFO][4732] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a96987467f ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9ngv9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0" Jul 6 23:37:46.398556 containerd[1520]: 2025-07-06 23:37:46.378 [INFO][4732] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9ngv9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0" Jul 6 23:37:46.398556 containerd[1520]: 2025-07-06 23:37:46.379 [INFO][4732] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9ngv9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"64788c1f-04c1-4e4c-8486-e17aa8abc6f7", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882", Pod:"coredns-7c65d6cfc9-9ngv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a96987467f", MAC:"8a:7c:e5:bc:7b:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:46.398556 containerd[1520]: 2025-07-06 23:37:46.394 [INFO][4732] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9ngv9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9ngv9-eth0" Jul 6 23:37:46.424764 containerd[1520]: time="2025-07-06T23:37:46.424674001Z" level=info msg="connecting to shim 5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882" address="unix:///run/containerd/s/258dfe4082c6e08f0a6a3dd294f489ae79b3824099a546408d0e6d5cd66a5f6d" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:37:46.451802 kubelet[2634]: I0706 23:37:46.451759 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:37:46.457450 systemd[1]: Started cri-containerd-5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882.scope - libcontainer container 5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882. Jul 6 23:37:46.460572 kubelet[2634]: I0706 23:37:46.460214 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bk6h2" podStartSLOduration=33.460196607 podStartE2EDuration="33.460196607s" podCreationTimestamp="2025-07-06 23:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:37:46.458744737 +0000 UTC m=+38.282143451" watchObservedRunningTime="2025-07-06 23:37:46.460196607 +0000 UTC m=+38.283595321" Jul 6 23:37:46.490736 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:37:46.543262 containerd[1520]: time="2025-07-06T23:37:46.543136957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9ngv9,Uid:64788c1f-04c1-4e4c-8486-e17aa8abc6f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882\"" Jul 6 23:37:46.548872 containerd[1520]: time="2025-07-06T23:37:46.548771904Z" level=info 
msg="CreateContainer within sandbox \"5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:37:46.564113 containerd[1520]: time="2025-07-06T23:37:46.563876325Z" level=info msg="Container 8792593f7f993bfeb3996b0bf80223852f816d48602b43d4889ab15ceaa3cec7: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:46.571124 containerd[1520]: time="2025-07-06T23:37:46.571079550Z" level=info msg="CreateContainer within sandbox \"5f3b23d5f912683aa921a296ee90717a96f7349cb435b1d86e9854827ee4e882\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8792593f7f993bfeb3996b0bf80223852f816d48602b43d4889ab15ceaa3cec7\"" Jul 6 23:37:46.572285 containerd[1520]: time="2025-07-06T23:37:46.572244198Z" level=info msg="StartContainer for \"8792593f7f993bfeb3996b0bf80223852f816d48602b43d4889ab15ceaa3cec7\"" Jul 6 23:37:46.573808 containerd[1520]: time="2025-07-06T23:37:46.573765073Z" level=info msg="connecting to shim 8792593f7f993bfeb3996b0bf80223852f816d48602b43d4889ab15ceaa3cec7" address="unix:///run/containerd/s/258dfe4082c6e08f0a6a3dd294f489ae79b3824099a546408d0e6d5cd66a5f6d" protocol=ttrpc version=3 Jul 6 23:37:46.610078 systemd[1]: Started cri-containerd-8792593f7f993bfeb3996b0bf80223852f816d48602b43d4889ab15ceaa3cec7.scope - libcontainer container 8792593f7f993bfeb3996b0bf80223852f816d48602b43d4889ab15ceaa3cec7. 
Jul 6 23:37:46.662326 containerd[1520]: time="2025-07-06T23:37:46.662278605Z" level=info msg="StartContainer for \"8792593f7f993bfeb3996b0bf80223852f816d48602b43d4889ab15ceaa3cec7\" returns successfully" Jul 6 23:37:46.703197 containerd[1520]: time="2025-07-06T23:37:46.703150135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:46.704073 containerd[1520]: time="2025-07-06T23:37:46.704044243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 6 23:37:46.705983 containerd[1520]: time="2025-07-06T23:37:46.705948947Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:46.706956 containerd[1520]: time="2025-07-06T23:37:46.706827733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:46.708549 containerd[1520]: time="2025-07-06T23:37:46.708520341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.117614765s" Jul 6 23:37:46.708614 containerd[1520]: time="2025-07-06T23:37:46.708550464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 6 23:37:46.711001 containerd[1520]: time="2025-07-06T23:37:46.710968486Z" level=info msg="CreateContainer within sandbox 
\"89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:37:46.724944 containerd[1520]: time="2025-07-06T23:37:46.724799612Z" level=info msg="Container 04c0fda5c63b671e4338e62f453e53efec8a23a51a12b5ef5eaa7330ae85266f: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:46.739074 containerd[1520]: time="2025-07-06T23:37:46.738997646Z" level=info msg="CreateContainer within sandbox \"89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"04c0fda5c63b671e4338e62f453e53efec8a23a51a12b5ef5eaa7330ae85266f\"" Jul 6 23:37:46.739635 containerd[1520]: time="2025-07-06T23:37:46.739483922Z" level=info msg="StartContainer for \"04c0fda5c63b671e4338e62f453e53efec8a23a51a12b5ef5eaa7330ae85266f\"" Jul 6 23:37:46.741789 containerd[1520]: time="2025-07-06T23:37:46.741748494Z" level=info msg="connecting to shim 04c0fda5c63b671e4338e62f453e53efec8a23a51a12b5ef5eaa7330ae85266f" address="unix:///run/containerd/s/f2724c6d622fef41d4e509fd3979d7279e07072ab4c3705b53a3b911250dda4c" protocol=ttrpc version=3 Jul 6 23:37:46.766284 systemd[1]: Started cri-containerd-04c0fda5c63b671e4338e62f453e53efec8a23a51a12b5ef5eaa7330ae85266f.scope - libcontainer container 04c0fda5c63b671e4338e62f453e53efec8a23a51a12b5ef5eaa7330ae85266f. 
Jul 6 23:37:46.817331 containerd[1520]: time="2025-07-06T23:37:46.817224520Z" level=info msg="StartContainer for \"04c0fda5c63b671e4338e62f453e53efec8a23a51a12b5ef5eaa7330ae85266f\" returns successfully" Jul 6 23:37:46.818374 containerd[1520]: time="2025-07-06T23:37:46.818325603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:37:47.091070 systemd-networkd[1428]: cali097e4aced39: Gained IPv6LL Jul 6 23:37:47.273099 containerd[1520]: time="2025-07-06T23:37:47.273034216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c6c57ffb-d8d7b,Uid:b1b7190a-9678-4dec-abee-43c139b7b0b4,Namespace:calico-system,Attempt:0,}" Jul 6 23:37:47.438091 systemd-networkd[1428]: calie41c1e6b3cc: Link UP Jul 6 23:37:47.439130 systemd-networkd[1428]: calie41c1e6b3cc: Gained carrier Jul 6 23:37:47.453733 kubelet[2634]: I0706 23:37:47.453663 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-786b9888cb-v8zz6" podStartSLOduration=23.45355209 podStartE2EDuration="23.45355209s" podCreationTimestamp="2025-07-06 23:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:37:46.50521269 +0000 UTC m=+38.328611404" watchObservedRunningTime="2025-07-06 23:37:47.45355209 +0000 UTC m=+39.276950764" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.351 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0 calico-kube-controllers-55c6c57ffb- calico-system b1b7190a-9678-4dec-abee-43c139b7b0b4 814 0 2025-07-06 23:37:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55c6c57ffb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55c6c57ffb-d8d7b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie41c1e6b3cc [] [] }} ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Namespace="calico-system" Pod="calico-kube-controllers-55c6c57ffb-d8d7b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.351 [INFO][4879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Namespace="calico-system" Pod="calico-kube-controllers-55c6c57ffb-d8d7b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.380 [INFO][4893] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" HandleID="k8s-pod-network.551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Workload="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.380 [INFO][4893] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" HandleID="k8s-pod-network.551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Workload="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55c6c57ffb-d8d7b", "timestamp":"2025-07-06 23:37:47.379990241 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.380 [INFO][4893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.380 [INFO][4893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.380 [INFO][4893] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.398 [INFO][4893] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" host="localhost" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.403 [INFO][4893] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.409 [INFO][4893] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.411 [INFO][4893] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.414 [INFO][4893] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.414 [INFO][4893] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" host="localhost" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.415 [INFO][4893] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.420 [INFO][4893] ipam/ipam.go 
1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" host="localhost" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.431 [INFO][4893] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" host="localhost" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.431 [INFO][4893] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" host="localhost" Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.431 [INFO][4893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:37:47.455719 containerd[1520]: 2025-07-06 23:37:47.431 [INFO][4893] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" HandleID="k8s-pod-network.551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Workload="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0" Jul 6 23:37:47.457638 containerd[1520]: 2025-07-06 23:37:47.434 [INFO][4879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Namespace="calico-system" Pod="calico-kube-controllers-55c6c57ffb-d8d7b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0", GenerateName:"calico-kube-controllers-55c6c57ffb-", Namespace:"calico-system", SelfLink:"", UID:"b1b7190a-9678-4dec-abee-43c139b7b0b4", ResourceVersion:"814", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c6c57ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55c6c57ffb-d8d7b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie41c1e6b3cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:47.457638 containerd[1520]: 2025-07-06 23:37:47.435 [INFO][4879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Namespace="calico-system" Pod="calico-kube-controllers-55c6c57ffb-d8d7b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0" Jul 6 23:37:47.457638 containerd[1520]: 2025-07-06 23:37:47.435 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie41c1e6b3cc ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Namespace="calico-system" Pod="calico-kube-controllers-55c6c57ffb-d8d7b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0" Jul 6 23:37:47.457638 containerd[1520]: 2025-07-06 23:37:47.438 [INFO][4879] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Namespace="calico-system" Pod="calico-kube-controllers-55c6c57ffb-d8d7b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0" Jul 6 23:37:47.457638 containerd[1520]: 2025-07-06 23:37:47.438 [INFO][4879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Namespace="calico-system" Pod="calico-kube-controllers-55c6c57ffb-d8d7b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0", GenerateName:"calico-kube-controllers-55c6c57ffb-", Namespace:"calico-system", SelfLink:"", UID:"b1b7190a-9678-4dec-abee-43c139b7b0b4", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c6c57ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff", Pod:"calico-kube-controllers-55c6c57ffb-d8d7b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie41c1e6b3cc", MAC:"d2:2e:4e:35:36:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:47.457638 containerd[1520]: 2025-07-06 23:37:47.451 [INFO][4879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" Namespace="calico-system" Pod="calico-kube-controllers-55c6c57ffb-d8d7b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c6c57ffb--d8d7b-eth0" Jul 6 23:37:47.476385 kubelet[2634]: I0706 23:37:47.476347 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:37:47.490689 kubelet[2634]: I0706 23:37:47.489960 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9ngv9" podStartSLOduration=34.489941925 podStartE2EDuration="34.489941925s" podCreationTimestamp="2025-07-06 23:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:37:47.4890489 +0000 UTC m=+39.312464775" watchObservedRunningTime="2025-07-06 23:37:47.489941925 +0000 UTC m=+39.313340639" Jul 6 23:37:47.492318 containerd[1520]: time="2025-07-06T23:37:47.492279817Z" level=info msg="connecting to shim 551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff" address="unix:///run/containerd/s/d4ae134aa686f8e8b935296d19cb837c2332788adc36b6bc8e6140c44d288d4e" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:37:47.527144 systemd[1]: Started cri-containerd-551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff.scope - libcontainer container 551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff. 
Jul 6 23:37:47.542083 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:37:47.562281 containerd[1520]: time="2025-07-06T23:37:47.562243522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c6c57ffb-d8d7b,Uid:b1b7190a-9678-4dec-abee-43c139b7b0b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff\"" Jul 6 23:37:47.603629 systemd-networkd[1428]: cali3a96987467f: Gained IPv6LL Jul 6 23:37:47.667122 systemd-networkd[1428]: cali5cb5767aa29: Gained IPv6LL Jul 6 23:37:47.733763 systemd-networkd[1428]: cali15578dc1af8: Gained IPv6LL Jul 6 23:37:48.099355 containerd[1520]: time="2025-07-06T23:37:48.099303420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:48.100766 containerd[1520]: time="2025-07-06T23:37:48.100709441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 6 23:37:48.101630 containerd[1520]: time="2025-07-06T23:37:48.101588943Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:48.103743 containerd[1520]: time="2025-07-06T23:37:48.103689054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:37:48.104444 containerd[1520]: time="2025-07-06T23:37:48.104409865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.28605374s" Jul 6 23:37:48.104444 containerd[1520]: time="2025-07-06T23:37:48.104440988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 6 23:37:48.105445 containerd[1520]: time="2025-07-06T23:37:48.105408857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:37:48.108153 containerd[1520]: time="2025-07-06T23:37:48.108119491Z" level=info msg="CreateContainer within sandbox \"89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:37:48.115941 containerd[1520]: time="2025-07-06T23:37:48.115302525Z" level=info msg="Container 897cf891123d1d478628256b6332f71fdf1e8af1068211a9c84e0453303a68e5: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:37:48.127125 containerd[1520]: time="2025-07-06T23:37:48.127082088Z" level=info msg="CreateContainer within sandbox \"89010a9ffabd8112405ffa86f3f1dcfbf9691b37d961fa99853e9bb4efb95917\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"897cf891123d1d478628256b6332f71fdf1e8af1068211a9c84e0453303a68e5\"" Jul 6 23:37:48.131010 containerd[1520]: time="2025-07-06T23:37:48.130948085Z" level=info msg="StartContainer for \"897cf891123d1d478628256b6332f71fdf1e8af1068211a9c84e0453303a68e5\"" Jul 6 23:37:48.132616 containerd[1520]: time="2025-07-06T23:37:48.132574602Z" level=info msg="connecting to shim 897cf891123d1d478628256b6332f71fdf1e8af1068211a9c84e0453303a68e5" address="unix:///run/containerd/s/f2724c6d622fef41d4e509fd3979d7279e07072ab4c3705b53a3b911250dda4c" protocol=ttrpc version=3 Jul 6 23:37:48.153093 
systemd[1]: Started cri-containerd-897cf891123d1d478628256b6332f71fdf1e8af1068211a9c84e0453303a68e5.scope - libcontainer container 897cf891123d1d478628256b6332f71fdf1e8af1068211a9c84e0453303a68e5. Jul 6 23:37:48.193803 containerd[1520]: time="2025-07-06T23:37:48.193761701Z" level=info msg="StartContainer for \"897cf891123d1d478628256b6332f71fdf1e8af1068211a9c84e0453303a68e5\" returns successfully" Jul 6 23:37:48.269196 containerd[1520]: time="2025-07-06T23:37:48.269157979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-shs2j,Uid:563e0255-c29b-4292-9f17-d6aef8b5cd13,Namespace:calico-system,Attempt:0,}" Jul 6 23:37:48.353532 kubelet[2634]: I0706 23:37:48.353410 2634 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:37:48.353532 kubelet[2634]: I0706 23:37:48.353471 2634 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:37:48.389419 systemd-networkd[1428]: califac3ea9f91d: Link UP Jul 6 23:37:48.389624 systemd-networkd[1428]: califac3ea9f91d: Gained carrier Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.306 [INFO][5001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--shs2j-eth0 goldmane-58fd7646b9- calico-system 563e0255-c29b-4292-9f17-d6aef8b5cd13 807 0 2025-07-06 23:37:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-shs2j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califac3ea9f91d [] [] }} 
ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Namespace="calico-system" Pod="goldmane-58fd7646b9-shs2j" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--shs2j-" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.306 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Namespace="calico-system" Pod="goldmane-58fd7646b9-shs2j" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--shs2j-eth0" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.335 [INFO][5017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" HandleID="k8s-pod-network.3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Workload="localhost-k8s-goldmane--58fd7646b9--shs2j-eth0" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.335 [INFO][5017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" HandleID="k8s-pod-network.3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Workload="localhost-k8s-goldmane--58fd7646b9--shs2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-shs2j", "timestamp":"2025-07-06 23:37:48.335271951 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.335 [INFO][5017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.335 [INFO][5017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.335 [INFO][5017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.345 [INFO][5017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" host="localhost" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.352 [INFO][5017] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.357 [INFO][5017] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.359 [INFO][5017] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.363 [INFO][5017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.363 [INFO][5017] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" host="localhost" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.367 [INFO][5017] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.375 [INFO][5017] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" host="localhost" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.383 [INFO][5017] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" host="localhost" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.383 [INFO][5017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" host="localhost" Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.383 [INFO][5017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:37:48.403658 containerd[1520]: 2025-07-06 23:37:48.383 [INFO][5017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" HandleID="k8s-pod-network.3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Workload="localhost-k8s-goldmane--58fd7646b9--shs2j-eth0" Jul 6 23:37:48.404357 containerd[1520]: 2025-07-06 23:37:48.385 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Namespace="calico-system" Pod="goldmane-58fd7646b9-shs2j" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--shs2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--shs2j-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"563e0255-c29b-4292-9f17-d6aef8b5cd13", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-shs2j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califac3ea9f91d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:37:48.404357 containerd[1520]: 2025-07-06 23:37:48.386 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Namespace="calico-system" Pod="goldmane-58fd7646b9-shs2j" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--shs2j-eth0" Jul 6 23:37:48.404357 containerd[1520]: 2025-07-06 23:37:48.386 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califac3ea9f91d ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Namespace="calico-system" Pod="goldmane-58fd7646b9-shs2j" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--shs2j-eth0" Jul 6 23:37:48.404357 containerd[1520]: 2025-07-06 23:37:48.389 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Namespace="calico-system" Pod="goldmane-58fd7646b9-shs2j" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--shs2j-eth0" Jul 6 23:37:48.404357 containerd[1520]: 2025-07-06 23:37:48.390 [INFO][5001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Namespace="calico-system" Pod="goldmane-58fd7646b9-shs2j" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--shs2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--shs2j-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"563e0255-c29b-4292-9f17-d6aef8b5cd13", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 37, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed", Pod:"goldmane-58fd7646b9-shs2j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califac3ea9f91d", MAC:"46:de:45:ea:64:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:37:48.404357 containerd[1520]: 2025-07-06 23:37:48.401 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" Namespace="calico-system" Pod="goldmane-58fd7646b9-shs2j" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--shs2j-eth0"
Jul 6 23:37:48.426103 containerd[1520]: time="2025-07-06T23:37:48.426046809Z" level=info msg="connecting to shim 3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed" address="unix:///run/containerd/s/258f42b5951e8dab13f962550cd06eac073085cc48b4621f0b34df9b476970d5" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:37:48.456089 systemd[1]: Started cri-containerd-3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed.scope - libcontainer container 3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed.
Jul 6 23:37:48.469093 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:37:48.499628 kubelet[2634]: I0706 23:37:48.499477 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fflkr" podStartSLOduration=21.984122238 podStartE2EDuration="24.499428702s" podCreationTimestamp="2025-07-06 23:37:24 +0000 UTC" firstStartedPulling="2025-07-06 23:37:45.589959143 +0000 UTC m=+37.413357857" lastFinishedPulling="2025-07-06 23:37:48.105265607 +0000 UTC m=+39.928664321" observedRunningTime="2025-07-06 23:37:48.497338112 +0000 UTC m=+40.320736826" watchObservedRunningTime="2025-07-06 23:37:48.499428702 +0000 UTC m=+40.322827416"
Jul 6 23:37:48.504283 containerd[1520]: time="2025-07-06T23:37:48.504231806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-shs2j,Uid:563e0255-c29b-4292-9f17-d6aef8b5cd13,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed\""
Jul 6 23:37:48.627067 systemd-networkd[1428]: calie41c1e6b3cc: Gained IPv6LL
Jul 6 23:37:49.651053 systemd-networkd[1428]: califac3ea9f91d: Gained IPv6LL
Jul 6 23:37:49.811973 containerd[1520]: time="2025-07-06T23:37:49.811600954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:49.812580 containerd[1520]: time="2025-07-06T23:37:49.812531059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336"
Jul 6 23:37:49.813579 containerd[1520]: time="2025-07-06T23:37:49.813546730Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:49.816809 containerd[1520]: time="2025-07-06T23:37:49.816764755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:49.817483 containerd[1520]: time="2025-07-06T23:37:49.817429641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.711985901s"
Jul 6 23:37:49.817483 containerd[1520]: time="2025-07-06T23:37:49.817481044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""
Jul 6 23:37:49.819224 containerd[1520]: time="2025-07-06T23:37:49.819181603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 6 23:37:49.825919 containerd[1520]: time="2025-07-06T23:37:49.825859989Z" level=info msg="CreateContainer within sandbox \"551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 6 23:37:49.833444 containerd[1520]: time="2025-07-06T23:37:49.832545535Z" level=info msg="Container cd5edbcd90bda995eb1a9d024fc778f6e43fcaeb898867ee0015aecbbcafb3c6: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:37:49.839600 containerd[1520]: time="2025-07-06T23:37:49.839550104Z" level=info msg="CreateContainer within sandbox \"551d0bdfb7e4f47d293f2478208362d340a9384f259f5b5461ddc1484933d4ff\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cd5edbcd90bda995eb1a9d024fc778f6e43fcaeb898867ee0015aecbbcafb3c6\""
Jul 6 23:37:49.840200 containerd[1520]: time="2025-07-06T23:37:49.840065780Z" level=info msg="StartContainer for \"cd5edbcd90bda995eb1a9d024fc778f6e43fcaeb898867ee0015aecbbcafb3c6\""
Jul 6 23:37:49.841497 containerd[1520]: time="2025-07-06T23:37:49.841458757Z" level=info msg="connecting to shim cd5edbcd90bda995eb1a9d024fc778f6e43fcaeb898867ee0015aecbbcafb3c6" address="unix:///run/containerd/s/d4ae134aa686f8e8b935296d19cb837c2332788adc36b6bc8e6140c44d288d4e" protocol=ttrpc version=3
Jul 6 23:37:49.864141 systemd[1]: Started cri-containerd-cd5edbcd90bda995eb1a9d024fc778f6e43fcaeb898867ee0015aecbbcafb3c6.scope - libcontainer container cd5edbcd90bda995eb1a9d024fc778f6e43fcaeb898867ee0015aecbbcafb3c6.
Jul 6 23:37:49.906828 containerd[1520]: time="2025-07-06T23:37:49.906690988Z" level=info msg="StartContainer for \"cd5edbcd90bda995eb1a9d024fc778f6e43fcaeb898867ee0015aecbbcafb3c6\" returns successfully"
Jul 6 23:37:50.508719 kubelet[2634]: I0706 23:37:50.508657 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55c6c57ffb-d8d7b" podStartSLOduration=23.254585761 podStartE2EDuration="25.508640833s" podCreationTimestamp="2025-07-06 23:37:25 +0000 UTC" firstStartedPulling="2025-07-06 23:37:47.564293553 +0000 UTC m=+39.387692267" lastFinishedPulling="2025-07-06 23:37:49.818348625 +0000 UTC m=+41.641747339" observedRunningTime="2025-07-06 23:37:50.508222965 +0000 UTC m=+42.331621679" watchObservedRunningTime="2025-07-06 23:37:50.508640833 +0000 UTC m=+42.332039547"
Jul 6 23:37:50.821712 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:40834.service - OpenSSH per-connection server daemon (10.0.0.1:40834).
Jul 6 23:37:50.945043 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 40834 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:37:50.949025 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:50.955285 systemd-logind[1503]: New session 8 of user core.
Jul 6 23:37:50.965651 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:37:51.322738 sshd[5142]: Connection closed by 10.0.0.1 port 40834
Jul 6 23:37:51.323843 sshd-session[5140]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:51.327701 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:37:51.328072 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:40834.service: Deactivated successfully.
Jul 6 23:37:51.331554 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:37:51.332814 systemd-logind[1503]: Removed session 8.
Jul 6 23:37:51.497559 kubelet[2634]: I0706 23:37:51.497522 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:37:52.515332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890178551.mount: Deactivated successfully.
Jul 6 23:37:52.905276 containerd[1520]: time="2025-07-06T23:37:52.905166940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:52.907285 containerd[1520]: time="2025-07-06T23:37:52.907249955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Jul 6 23:37:52.908494 containerd[1520]: time="2025-07-06T23:37:52.908437512Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:52.911620 containerd[1520]: time="2025-07-06T23:37:52.911536914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:52.912937 containerd[1520]: time="2025-07-06T23:37:52.912825397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.093606792s"
Jul 6 23:37:52.912937 containerd[1520]: time="2025-07-06T23:37:52.912858359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Jul 6 23:37:52.916385 containerd[1520]: time="2025-07-06T23:37:52.916031605Z" level=info msg="CreateContainer within sandbox \"3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 6 23:37:52.924302 containerd[1520]: time="2025-07-06T23:37:52.924249059Z" level=info msg="Container 5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:37:52.933884 containerd[1520]: time="2025-07-06T23:37:52.933748796Z" level=info msg="CreateContainer within sandbox \"3ec2d1df24da66d52de8cea94e7b5c1b26a7b206a1205560896be1f94d5c38ed\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3\""
Jul 6 23:37:52.934458 containerd[1520]: time="2025-07-06T23:37:52.934376557Z" level=info msg="StartContainer for \"5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3\""
Jul 6 23:37:52.936110 containerd[1520]: time="2025-07-06T23:37:52.936042985Z" level=info msg="connecting to shim 5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3" address="unix:///run/containerd/s/258f42b5951e8dab13f962550cd06eac073085cc48b4621f0b34df9b476970d5" protocol=ttrpc version=3
Jul 6 23:37:52.956091 systemd[1]: Started cri-containerd-5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3.scope - libcontainer container 5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3.
Jul 6 23:37:52.999188 containerd[1520]: time="2025-07-06T23:37:52.997863520Z" level=info msg="StartContainer for \"5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3\" returns successfully"
Jul 6 23:37:53.516440 kubelet[2634]: I0706 23:37:53.516376 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-shs2j" podStartSLOduration=24.109045932 podStartE2EDuration="28.516347385s" podCreationTimestamp="2025-07-06 23:37:25 +0000 UTC" firstStartedPulling="2025-07-06 23:37:48.506393241 +0000 UTC m=+40.329791955" lastFinishedPulling="2025-07-06 23:37:52.913694694 +0000 UTC m=+44.737093408" observedRunningTime="2025-07-06 23:37:53.515080944 +0000 UTC m=+45.338479658" watchObservedRunningTime="2025-07-06 23:37:53.516347385 +0000 UTC m=+45.339746139"
Jul 6 23:37:54.507378 kubelet[2634]: I0706 23:37:54.505208 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:37:56.263545 containerd[1520]: time="2025-07-06T23:37:56.263500979Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3\" id:\"88b2014921a9bad10a2e6cc600ca8634f9e7776ae50690e11fdd9408caf49450\" pid:5225 exited_at:{seconds:1751845076 nanos:262803378}"
Jul 6 23:37:56.338383 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:39794.service - OpenSSH per-connection server daemon (10.0.0.1:39794).
Jul 6 23:37:56.431069 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 39794 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:37:56.432616 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:56.436971 systemd-logind[1503]: New session 9 of user core.
Jul 6 23:37:56.445086 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:37:56.654146 sshd[5241]: Connection closed by 10.0.0.1 port 39794
Jul 6 23:37:56.654430 sshd-session[5239]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:56.658403 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:39794.service: Deactivated successfully.
Jul 6 23:37:56.661871 systemd[1]: session-9.scope: Deactivated successfully.
Jul 6 23:37:56.662932 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit.
Jul 6 23:37:56.664696 systemd-logind[1503]: Removed session 9.
Jul 6 23:37:57.609679 kubelet[2634]: I0706 23:37:57.608711 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:37:57.680269 containerd[1520]: time="2025-07-06T23:37:57.680047845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3\" id:\"c03569f4834bcd81564798fe968875a69e961a63eb3c6ab5f9906724b7d1309f\" pid:5265 exit_status:1 exited_at:{seconds:1751845077 nanos:679681424}"
Jul 6 23:37:57.750605 containerd[1520]: time="2025-07-06T23:37:57.750569826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3\" id:\"5b93f7867f0ae54cb4b0f88be1d516f3cf13d385776473696422d26ec739e212\" pid:5288 exit_status:1 exited_at:{seconds:1751845077 nanos:750296490}"
Jul 6 23:37:59.313986 kubelet[2634]: I0706 23:37:59.313888 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:37:59.347040 containerd[1520]: time="2025-07-06T23:37:59.347001508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd5edbcd90bda995eb1a9d024fc778f6e43fcaeb898867ee0015aecbbcafb3c6\" id:\"4ee253c46592d75e57afceb4975913934a26f37d70a1d681285d5554f42c7d3c\" pid:5314 exited_at:{seconds:1751845079 nanos:346703171}"
Jul 6 23:37:59.386887 containerd[1520]: time="2025-07-06T23:37:59.386846009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd5edbcd90bda995eb1a9d024fc778f6e43fcaeb898867ee0015aecbbcafb3c6\" id:\"b41f516f21d3cdbe39031fba40f9142571410dc421a23b30515d44d17b8deb27\" pid:5337 exited_at:{seconds:1751845079 nanos:386619796}"
Jul 6 23:38:01.671603 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:39806.service - OpenSSH per-connection server daemon (10.0.0.1:39806).
Jul 6 23:38:01.739429 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 39806 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:01.740945 sshd-session[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:01.745719 systemd-logind[1503]: New session 10 of user core.
Jul 6 23:38:01.761102 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:38:01.932402 sshd[5355]: Connection closed by 10.0.0.1 port 39806
Jul 6 23:38:01.932804 sshd-session[5353]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:01.944901 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:39806.service: Deactivated successfully.
Jul 6 23:38:01.947098 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:38:01.949456 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:38:01.954392 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:39810.service - OpenSSH per-connection server daemon (10.0.0.1:39810).
Jul 6 23:38:01.955449 systemd-logind[1503]: Removed session 10.
Jul 6 23:38:02.008744 sshd[5369]: Accepted publickey for core from 10.0.0.1 port 39810 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:02.010207 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:02.014248 systemd-logind[1503]: New session 11 of user core.
Jul 6 23:38:02.032058 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:38:02.253162 sshd[5371]: Connection closed by 10.0.0.1 port 39810
Jul 6 23:38:02.253732 sshd-session[5369]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:02.266711 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:39810.service: Deactivated successfully.
Jul 6 23:38:02.271675 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:38:02.277993 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:38:02.281565 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:39814.service - OpenSSH per-connection server daemon (10.0.0.1:39814).
Jul 6 23:38:02.282793 systemd-logind[1503]: Removed session 11.
Jul 6 23:38:02.336023 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 39814 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:02.337239 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:02.341800 systemd-logind[1503]: New session 12 of user core.
Jul 6 23:38:02.353087 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:38:02.526733 sshd[5384]: Connection closed by 10.0.0.1 port 39814
Jul 6 23:38:02.527087 sshd-session[5382]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:02.531067 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:39814.service: Deactivated successfully.
Jul 6 23:38:02.534224 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:38:02.535951 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:38:02.537622 systemd-logind[1503]: Removed session 12.
Jul 6 23:38:05.745042 kubelet[2634]: I0706 23:38:05.744863 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:38:07.508967 containerd[1520]: time="2025-07-06T23:38:07.508786260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60e980e62a3adb057088d776f81917204e8c6a285aa3adaa2a4d78374e03ef75\" id:\"a7e041d97f5cb510ea699a1add04c8c5b4ec36fea04fa45ee98a8f42b6f40e00\" pid:5416 exited_at:{seconds:1751845087 nanos:508470244}"
Jul 6 23:38:07.546392 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:36066.service - OpenSSH per-connection server daemon (10.0.0.1:36066).
Jul 6 23:38:07.619450 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 36066 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:07.621316 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:07.627102 systemd-logind[1503]: New session 13 of user core.
Jul 6 23:38:07.637137 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:38:07.706138 kubelet[2634]: I0706 23:38:07.701679 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:38:07.879299 sshd[5431]: Connection closed by 10.0.0.1 port 36066
Jul 6 23:38:07.879840 sshd-session[5429]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:07.893997 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:36066.service: Deactivated successfully.
Jul 6 23:38:07.897031 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:38:07.899017 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:38:07.901405 systemd-logind[1503]: Removed session 13.
Jul 6 23:38:07.903556 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:36072.service - OpenSSH per-connection server daemon (10.0.0.1:36072).
Jul 6 23:38:07.954953 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 36072 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:07.955565 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:07.960177 systemd-logind[1503]: New session 14 of user core.
Jul 6 23:38:07.970104 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:38:08.215086 sshd[5450]: Connection closed by 10.0.0.1 port 36072
Jul 6 23:38:08.214815 sshd-session[5447]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:08.226446 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:36072.service: Deactivated successfully.
Jul 6 23:38:08.230105 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:38:08.233477 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:38:08.237401 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:36088.service - OpenSSH per-connection server daemon (10.0.0.1:36088).
Jul 6 23:38:08.238191 systemd-logind[1503]: Removed session 14.
Jul 6 23:38:08.303586 sshd[5461]: Accepted publickey for core from 10.0.0.1 port 36088 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:08.305250 sshd-session[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:08.310683 systemd-logind[1503]: New session 15 of user core.
Jul 6 23:38:08.321142 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:38:10.106039 sshd[5465]: Connection closed by 10.0.0.1 port 36088
Jul 6 23:38:10.106589 sshd-session[5461]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:10.119164 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:36088.service: Deactivated successfully.
Jul 6 23:38:10.123573 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:38:10.124172 systemd[1]: session-15.scope: Consumed 575ms CPU time, 70.8M memory peak.
Jul 6 23:38:10.128566 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:38:10.135100 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:36104.service - OpenSSH per-connection server daemon (10.0.0.1:36104).
Jul 6 23:38:10.139462 systemd-logind[1503]: Removed session 15.
Jul 6 23:38:10.213811 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 36104 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:10.215314 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:10.220901 systemd-logind[1503]: New session 16 of user core.
Jul 6 23:38:10.230105 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:38:10.561650 sshd[5490]: Connection closed by 10.0.0.1 port 36104
Jul 6 23:38:10.562239 sshd-session[5487]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:10.571992 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:36104.service: Deactivated successfully.
Jul 6 23:38:10.578310 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:38:10.579484 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:38:10.582760 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:36106.service - OpenSSH per-connection server daemon (10.0.0.1:36106).
Jul 6 23:38:10.584256 systemd-logind[1503]: Removed session 16.
Jul 6 23:38:10.636731 sshd[5503]: Accepted publickey for core from 10.0.0.1 port 36106 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:10.638236 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:10.642530 systemd-logind[1503]: New session 17 of user core.
Jul 6 23:38:10.649162 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:38:10.802069 sshd[5505]: Connection closed by 10.0.0.1 port 36106
Jul 6 23:38:10.802413 sshd-session[5503]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:10.806725 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:36106.service: Deactivated successfully.
Jul 6 23:38:10.808726 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:38:10.809826 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:38:10.811215 systemd-logind[1503]: Removed session 17.
Jul 6 23:38:15.814485 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:45232.service - OpenSSH per-connection server daemon (10.0.0.1:45232).
Jul 6 23:38:15.869255 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 45232 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:15.870766 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:15.875556 systemd-logind[1503]: New session 18 of user core.
Jul 6 23:38:15.882117 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:38:16.021652 sshd[5526]: Connection closed by 10.0.0.1 port 45232
Jul 6 23:38:16.022029 sshd-session[5524]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:16.025385 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:45232.service: Deactivated successfully.
Jul 6 23:38:16.027167 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:38:16.028078 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:38:16.029399 systemd-logind[1503]: Removed session 18.
Jul 6 23:38:21.033598 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:45236.service - OpenSSH per-connection server daemon (10.0.0.1:45236).
Jul 6 23:38:21.093958 sshd[5541]: Accepted publickey for core from 10.0.0.1 port 45236 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:21.095520 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:21.103170 systemd-logind[1503]: New session 19 of user core.
Jul 6 23:38:21.107064 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:38:21.247158 sshd[5543]: Connection closed by 10.0.0.1 port 45236
Jul 6 23:38:21.247667 sshd-session[5541]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:21.251239 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:45236.service: Deactivated successfully.
Jul 6 23:38:21.253157 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:38:21.254779 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:38:21.256374 systemd-logind[1503]: Removed session 19.
Jul 6 23:38:26.263600 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:37248.service - OpenSSH per-connection server daemon (10.0.0.1:37248).
Jul 6 23:38:26.333574 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 37248 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:38:26.336888 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:26.344270 systemd-logind[1503]: New session 20 of user core.
Jul 6 23:38:26.357154 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:38:26.522896 sshd[5564]: Connection closed by 10.0.0.1 port 37248
Jul 6 23:38:26.524301 sshd-session[5562]: pam_unix(sshd:session): session closed for user core
Jul 6 23:38:26.531181 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:37248.service: Deactivated successfully.
Jul 6 23:38:26.533770 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:38:26.535023 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:38:26.536604 systemd-logind[1503]: Removed session 20.
Jul 6 23:38:27.678888 containerd[1520]: time="2025-07-06T23:38:27.678832084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ea4ec051ee332cc31891ce715d68ee6b9793a862ed4353efd075839eea1f4b3\" id:\"c4015b3abd81e7802ebc2c4cd1b250018ee00bd4222b298f2b8fdfb2c4aa9a2c\" pid:5588 exited_at:{seconds:1751845107 nanos:678453771}"