Jul 10 23:53:25.780724 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 10 23:53:25.780745 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Jul 10 22:17:59 -00 2025 Jul 10 23:53:25.780755 kernel: KASLR enabled Jul 10 23:53:25.780760 kernel: efi: EFI v2.7 by EDK II Jul 10 23:53:25.780766 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Jul 10 23:53:25.780771 kernel: random: crng init done Jul 10 23:53:25.780785 kernel: secureboot: Secure boot disabled Jul 10 23:53:25.780792 kernel: ACPI: Early table checksum verification disabled Jul 10 23:53:25.780797 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Jul 10 23:53:25.780805 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 10 23:53:25.780811 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 23:53:25.780817 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 23:53:25.780823 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 23:53:25.780828 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 23:53:25.780835 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 23:53:25.780843 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 23:53:25.780849 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 23:53:25.780855 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 23:53:25.780860 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 23:53:25.780866 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 10 23:53:25.780872 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 10 23:53:25.780878 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 23:53:25.780884 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff] Jul 10 23:53:25.780890 kernel: Zone ranges: Jul 10 23:53:25.780896 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 23:53:25.780903 kernel: DMA32 empty Jul 10 23:53:25.780909 kernel: Normal empty Jul 10 23:53:25.780915 kernel: Device empty Jul 10 23:53:25.780921 kernel: Movable zone start for each node Jul 10 23:53:25.780927 kernel: Early memory node ranges Jul 10 23:53:25.780932 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Jul 10 23:53:25.780938 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Jul 10 23:53:25.780944 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Jul 10 23:53:25.780950 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Jul 10 23:53:25.780956 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Jul 10 23:53:25.780962 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Jul 10 23:53:25.780968 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Jul 10 23:53:25.780975 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Jul 10 23:53:25.780981 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Jul 10 23:53:25.780987 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 10 23:53:25.780995 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 
10 23:53:25.781001 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 10 23:53:25.781008 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 10 23:53:25.781016 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 23:53:25.781022 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 10 23:53:25.781029 kernel: psci: probing for conduit method from ACPI. Jul 10 23:53:25.781035 kernel: psci: PSCIv1.1 detected in firmware. Jul 10 23:53:25.781041 kernel: psci: Using standard PSCI v0.2 function IDs Jul 10 23:53:25.781047 kernel: psci: Trusted OS migration not required Jul 10 23:53:25.781053 kernel: psci: SMC Calling Convention v1.1 Jul 10 23:53:25.781060 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 10 23:53:25.781066 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 10 23:53:25.781072 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 10 23:53:25.781080 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 10 23:53:25.781087 kernel: Detected PIPT I-cache on CPU0 Jul 10 23:53:25.781093 kernel: CPU features: detected: GIC system register CPU interface Jul 10 23:53:25.781099 kernel: CPU features: detected: Spectre-v4 Jul 10 23:53:25.781106 kernel: CPU features: detected: Spectre-BHB Jul 10 23:53:25.781112 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 10 23:53:25.781118 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 10 23:53:25.781125 kernel: CPU features: detected: ARM erratum 1418040 Jul 10 23:53:25.781131 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 10 23:53:25.781138 kernel: alternatives: applying boot alternatives Jul 10 23:53:25.781145 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9ae0b1f40710648305be8f7e436b6937e65ac0b33eb84d1b5b7411684b4e7538 Jul 10 23:53:25.781153 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 23:53:25.781160 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 23:53:25.781167 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 23:53:25.781173 kernel: Fallback order for Node 0: 0 Jul 10 23:53:25.781179 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Jul 10 23:53:25.781185 kernel: Policy zone: DMA Jul 10 23:53:25.781191 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 23:53:25.781198 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Jul 10 23:53:25.781204 kernel: software IO TLB: area num 4. Jul 10 23:53:25.781211 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Jul 10 23:53:25.781217 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB) Jul 10 23:53:25.781223 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 23:53:25.781231 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 23:53:25.781238 kernel: rcu: RCU event tracing is enabled. Jul 10 23:53:25.781245 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 23:53:25.781252 kernel: Trampoline variant of Tasks RCU enabled. 
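The "Kernel command line:" entry above is what steers the rest of this boot: root=LABEL=ROOT selects the root filesystem, the mount.usr / verity.usr / verity.usrhash parameters describe the dm-verity-protected /usr partition, and flatcar.first_boot=detected marks this as a first boot. As a reader aid (not part of the log), here is a minimal Python sketch that parses such a command line into a dict; the sample string is copied from the entry above, and on a live system the same text can be read from /proc/cmdline. It naively splits on whitespace, so quoted values containing spaces are not handled.

    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
        "flatcar.first_boot=detected acpi=force "
        "verity.usrhash=9ae0b1f40710648305be8f7e436b6937e65ac0b33eb84d1b5b7411684b4e7538"
    )

    def parse_cmdline(text):
        """Split 'key=value' and bare-flag kernel parameters into a dict."""
        params = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    params = parse_cmdline(cmdline)
    print(params["root"])            # LABEL=ROOT
    print(params["verity.usrhash"])  # root hash used by verity-setup later in this log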
Jul 10 23:53:25.781259 kernel: Tracing variant of Tasks RCU enabled. Jul 10 23:53:25.781265 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 23:53:25.781272 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 23:53:25.781278 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 23:53:25.781284 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 23:53:25.781291 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 10 23:53:25.781297 kernel: GICv3: 256 SPIs implemented Jul 10 23:53:25.781305 kernel: GICv3: 0 Extended SPIs implemented Jul 10 23:53:25.781311 kernel: Root IRQ handler: gic_handle_irq Jul 10 23:53:25.781317 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 10 23:53:25.781323 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 10 23:53:25.781330 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 10 23:53:25.781336 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 10 23:53:25.781342 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Jul 10 23:53:25.781349 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Jul 10 23:53:25.781355 kernel: GICv3: using LPI property table @0x0000000040130000 Jul 10 23:53:25.781362 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Jul 10 23:53:25.781368 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 10 23:53:25.781374 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 23:53:25.781382 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 10 23:53:25.781388 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 10 23:53:25.781395 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 10 23:53:25.781401 kernel: arm-pv: using stolen time PV Jul 10 23:53:25.781408 kernel: Console: colour dummy device 80x25 Jul 10 23:53:25.781414 kernel: ACPI: Core revision 20240827 Jul 10 23:53:25.781421 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 10 23:53:25.781427 kernel: pid_max: default: 32768 minimum: 301 Jul 10 23:53:25.781443 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 10 23:53:25.781452 kernel: landlock: Up and running. Jul 10 23:53:25.781459 kernel: SELinux: Initializing. Jul 10 23:53:25.781465 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 23:53:25.781472 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 23:53:25.781479 kernel: rcu: Hierarchical SRCU implementation. Jul 10 23:53:25.781485 kernel: rcu: Max phase no-delay instances is 400. Jul 10 23:53:25.781492 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 10 23:53:25.781499 kernel: Remapping and enabling EFI services. Jul 10 23:53:25.781505 kernel: smp: Bringing up secondary CPUs ... 
Jul 10 23:53:25.781512 kernel: Detected PIPT I-cache on CPU1 Jul 10 23:53:25.781524 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 10 23:53:25.781531 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Jul 10 23:53:25.781540 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 23:53:25.781546 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 10 23:53:25.781553 kernel: Detected PIPT I-cache on CPU2 Jul 10 23:53:25.781560 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 10 23:53:25.781567 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Jul 10 23:53:25.781575 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 23:53:25.781582 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 10 23:53:25.781589 kernel: Detected PIPT I-cache on CPU3 Jul 10 23:53:25.781596 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 10 23:53:25.781603 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Jul 10 23:53:25.781610 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 23:53:25.781616 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 10 23:53:25.781623 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 23:53:25.781630 kernel: SMP: Total of 4 processors activated. Jul 10 23:53:25.781636 kernel: CPU: All CPU(s) started at EL1 Jul 10 23:53:25.781645 kernel: CPU features: detected: 32-bit EL0 Support Jul 10 23:53:25.781652 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 10 23:53:25.781659 kernel: CPU features: detected: Common not Private translations Jul 10 23:53:25.781665 kernel: CPU features: detected: CRC32 instructions Jul 10 23:53:25.781672 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 10 23:53:25.781679 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 10 23:53:25.781686 kernel: CPU features: detected: LSE atomic instructions Jul 10 23:53:25.781693 kernel: CPU features: detected: Privileged Access Never Jul 10 23:53:25.781700 kernel: CPU features: detected: RAS Extension Support Jul 10 23:53:25.781708 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 10 23:53:25.781715 kernel: alternatives: applying system-wide alternatives Jul 10 23:53:25.781722 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jul 10 23:53:25.781729 kernel: Memory: 2440420K/2572288K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 125920K reserved, 0K cma-reserved) Jul 10 23:53:25.781736 kernel: devtmpfs: initialized Jul 10 23:53:25.781743 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 23:53:25.781750 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 23:53:25.781756 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 10 23:53:25.781763 kernel: 0 pages in range for non-PLT usage Jul 10 23:53:25.781773 kernel: 508448 pages in range for PLT usage Jul 10 23:53:25.781785 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 23:53:25.781792 kernel: SMBIOS 3.0.0 present. 
Jul 10 23:53:25.781799 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jul 10 23:53:25.781806 kernel: DMI: Memory slots populated: 1/1 Jul 10 23:53:25.781812 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 23:53:25.781819 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 10 23:53:25.781826 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 10 23:53:25.781833 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 10 23:53:25.781842 kernel: audit: initializing netlink subsys (disabled) Jul 10 23:53:25.781849 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Jul 10 23:53:25.781855 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 23:53:25.781862 kernel: cpuidle: using governor menu Jul 10 23:53:25.781869 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 10 23:53:25.781875 kernel: ASID allocator initialised with 32768 entries Jul 10 23:53:25.781882 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 23:53:25.781889 kernel: Serial: AMBA PL011 UART driver Jul 10 23:53:25.781896 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 23:53:25.781904 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 10 23:53:25.781911 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 10 23:53:25.781918 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 10 23:53:25.781924 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 23:53:25.781931 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 23:53:25.781938 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 10 23:53:25.781945 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 10 23:53:25.781952 kernel: ACPI: Added _OSI(Module Device) Jul 10 23:53:25.781958 kernel: ACPI: Added _OSI(Processor Device) Jul 10 23:53:25.781966 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 23:53:25.781973 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 23:53:25.781980 kernel: ACPI: Interpreter enabled Jul 10 23:53:25.781987 kernel: ACPI: Using GIC for interrupt routing Jul 10 23:53:25.781994 kernel: ACPI: MCFG table detected, 1 entries Jul 10 23:53:25.782000 kernel: ACPI: CPU0 has been hot-added Jul 10 23:53:25.782007 kernel: ACPI: CPU1 has been hot-added Jul 10 23:53:25.782014 kernel: ACPI: CPU2 has been hot-added Jul 10 23:53:25.782020 kernel: ACPI: CPU3 has been hot-added Jul 10 23:53:25.782028 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 10 23:53:25.782035 kernel: printk: legacy console [ttyAMA0] enabled Jul 10 23:53:25.782042 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 23:53:25.782180 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 23:53:25.782245 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 10 23:53:25.782305 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 10 23:53:25.782362 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 10 23:53:25.782422 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 10 23:53:25.782504 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 10 23:53:25.782514 
kernel: PCI host bridge to bus 0000:00 Jul 10 23:53:25.782599 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 10 23:53:25.782658 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 10 23:53:25.782711 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 10 23:53:25.782762 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 23:53:25.782845 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jul 10 23:53:25.782927 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 10 23:53:25.782991 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Jul 10 23:53:25.783051 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Jul 10 23:53:25.783110 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jul 10 23:53:25.783168 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jul 10 23:53:25.783228 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Jul 10 23:53:25.783289 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Jul 10 23:53:25.783341 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 10 23:53:25.783393 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 10 23:53:25.783455 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 10 23:53:25.783465 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 10 23:53:25.783472 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 10 23:53:25.783479 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 10 23:53:25.783486 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 10 23:53:25.783495 kernel: iommu: Default domain type: Translated Jul 10 23:53:25.783502 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 10 23:53:25.783509 kernel: efivars: Registered efivars operations Jul 10 23:53:25.783516 kernel: vgaarb: loaded Jul 10 23:53:25.783523 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 10 23:53:25.783530 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 23:53:25.783537 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 23:53:25.783544 kernel: pnp: PnP ACPI init Jul 10 23:53:25.783610 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 10 23:53:25.783622 kernel: pnp: PnP ACPI: found 1 devices Jul 10 23:53:25.783629 kernel: NET: Registered PF_INET protocol family Jul 10 23:53:25.783636 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 23:53:25.783643 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 23:53:25.783650 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 23:53:25.783657 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 23:53:25.783664 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 10 23:53:25.783672 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 23:53:25.783680 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 23:53:25.783687 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 23:53:25.783694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 23:53:25.783701 kernel: PCI: CLS 0 bytes, default 64 Jul 10 23:53:25.783708 
kernel: kvm [1]: HYP mode not available Jul 10 23:53:25.783715 kernel: Initialise system trusted keyrings Jul 10 23:53:25.783722 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 23:53:25.783728 kernel: Key type asymmetric registered Jul 10 23:53:25.783735 kernel: Asymmetric key parser 'x509' registered Jul 10 23:53:25.783744 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 10 23:53:25.783751 kernel: io scheduler mq-deadline registered Jul 10 23:53:25.783758 kernel: io scheduler kyber registered Jul 10 23:53:25.783765 kernel: io scheduler bfq registered Jul 10 23:53:25.783772 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 10 23:53:25.783786 kernel: ACPI: button: Power Button [PWRB] Jul 10 23:53:25.783794 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 10 23:53:25.783857 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 10 23:53:25.783867 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 23:53:25.783876 kernel: thunder_xcv, ver 1.0 Jul 10 23:53:25.783883 kernel: thunder_bgx, ver 1.0 Jul 10 23:53:25.783890 kernel: nicpf, ver 1.0 Jul 10 23:53:25.783897 kernel: nicvf, ver 1.0 Jul 10 23:53:25.783966 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 10 23:53:25.784023 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T23:53:25 UTC (1752191605) Jul 10 23:53:25.784033 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 23:53:25.784040 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 10 23:53:25.784049 kernel: watchdog: NMI not fully supported Jul 10 23:53:25.784056 kernel: watchdog: Hard watchdog permanently disabled Jul 10 23:53:25.784063 kernel: NET: Registered PF_INET6 protocol family Jul 10 23:53:25.784070 kernel: Segment Routing with IPv6 Jul 10 23:53:25.784076 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 23:53:25.784083 kernel: NET: Registered PF_PACKET protocol family Jul 10 23:53:25.784090 kernel: Key type dns_resolver registered Jul 10 23:53:25.784097 kernel: registered taskstats version 1 Jul 10 23:53:25.784104 kernel: Loading compiled-in X.509 certificates Jul 10 23:53:25.784112 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 0718d62a7a0702c0da490764fdc6ec06d7382bc1' Jul 10 23:53:25.784119 kernel: Demotion targets for Node 0: null Jul 10 23:53:25.784126 kernel: Key type .fscrypt registered Jul 10 23:53:25.784132 kernel: Key type fscrypt-provisioning registered Jul 10 23:53:25.784139 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 23:53:25.784146 kernel: ima: Allocated hash algorithm: sha1 Jul 10 23:53:25.784153 kernel: ima: No architecture policies found Jul 10 23:53:25.784160 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 10 23:53:25.784167 kernel: clk: Disabling unused clocks Jul 10 23:53:25.784176 kernel: PM: genpd: Disabling unused power domains Jul 10 23:53:25.784183 kernel: Warning: unable to open an initial console. Jul 10 23:53:25.784190 kernel: Freeing unused kernel memory: 39488K Jul 10 23:53:25.784197 kernel: Run /init as init process Jul 10 23:53:25.784204 kernel: with arguments: Jul 10 23:53:25.784210 kernel: /init Jul 10 23:53:25.784217 kernel: with environment: Jul 10 23:53:25.784224 kernel: HOME=/ Jul 10 23:53:25.784231 kernel: TERM=linux Jul 10 23:53:25.784239 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 23:53:25.784247 systemd[1]: Successfully made /usr/ read-only. 
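One cross-check a reader can make here: the rtc-efi entry above reports the wall-clock time and its Unix timestamp side by side ("2025-07-10T23:53:25 UTC (1752191605)"), and the two are consistent. A short Python sketch (not part of the log) confirms it:

    from datetime import datetime, timezone

    ts = 1752191605                      # the value in parentheses in the rtc-efi entry
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    print(dt.isoformat())                # 2025-07-10T23:53:25+00:00
    assert dt == datetime(2025, 7, 10, 23, 53, 25, tzinfo=timezone.utc)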
Jul 10 23:53:25.784257 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 23:53:25.784265 systemd[1]: Detected virtualization kvm. Jul 10 23:53:25.784272 systemd[1]: Detected architecture arm64. Jul 10 23:53:25.784279 systemd[1]: Running in initrd. Jul 10 23:53:25.784286 systemd[1]: No hostname configured, using default hostname. Jul 10 23:53:25.784296 systemd[1]: Hostname set to . Jul 10 23:53:25.784303 systemd[1]: Initializing machine ID from VM UUID. Jul 10 23:53:25.784310 systemd[1]: Queued start job for default target initrd.target. Jul 10 23:53:25.784318 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 23:53:25.784325 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 23:53:25.784333 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 23:53:25.784340 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 23:53:25.784348 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 23:53:25.784357 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 23:53:25.784366 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 23:53:25.784373 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 23:53:25.784381 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 23:53:25.784388 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:53:25.784395 systemd[1]: Reached target paths.target - Path Units. Jul 10 23:53:25.784403 systemd[1]: Reached target slices.target - Slice Units. Jul 10 23:53:25.784411 systemd[1]: Reached target swap.target - Swaps. Jul 10 23:53:25.784418 systemd[1]: Reached target timers.target - Timer Units. Jul 10 23:53:25.784426 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 23:53:25.784451 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 23:53:25.784460 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 23:53:25.784468 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 23:53:25.784475 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 23:53:25.784482 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 23:53:25.784490 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 23:53:25.784500 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 23:53:25.784507 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 23:53:25.784515 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 23:53:25.784522 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
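The systemd banner above encodes build-time options as "+NAME" (compiled in) or "-NAME" (compiled out); for instance +SELINUX matches the SELinux initialisation seen earlier, while -APPARMOR means AppArmor support was not built in. A small sketch (not part of the log) that turns the feature string, copied verbatim from the banner, into queryable sets:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
                "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

    enabled  = {f[1:] for f in features.split() if f.startswith("+")}
    disabled = {f[1:] for f in features.split() if f.startswith("-")}
    print("SELINUX" in enabled)    # True
    print("APPARMOR" in disabled)  # True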
Jul 10 23:53:25.784530 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 23:53:25.784538 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 23:53:25.784545 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 23:53:25.784553 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 23:53:25.784562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:53:25.784570 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:53:25.784578 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 23:53:25.784585 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 23:53:25.784593 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 23:53:25.784620 systemd-journald[244]: Collecting audit messages is disabled. Jul 10 23:53:25.784639 systemd-journald[244]: Journal started Jul 10 23:53:25.784660 systemd-journald[244]: Runtime Journal (/run/log/journal/043cc503cd2b400ebc0357d3ccd5f40f) is 6M, max 48.5M, 42.4M free. Jul 10 23:53:25.775065 systemd-modules-load[246]: Inserted module 'overlay' Jul 10 23:53:25.787151 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:53:25.789447 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 23:53:25.791140 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 23:53:25.791617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 23:53:25.793982 systemd-modules-load[246]: Inserted module 'br_netfilter' Jul 10 23:53:25.794696 kernel: Bridge firewalling registered Jul 10 23:53:25.795230 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 23:53:25.796800 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 23:53:25.798512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 23:53:25.806683 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 23:53:25.810603 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:53:25.812276 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 23:53:25.815206 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 23:53:25.818863 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:53:25.824221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 23:53:25.825638 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:53:25.828217 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 23:53:25.831144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 10 23:53:25.857288 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9ae0b1f40710648305be8f7e436b6937e65ac0b33eb84d1b5b7411684b4e7538 Jul 10 23:53:25.873182 systemd-resolved[289]: Positive Trust Anchors: Jul 10 23:53:25.873201 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 23:53:25.873231 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 23:53:25.878178 systemd-resolved[289]: Defaulting to hostname 'linux'. Jul 10 23:53:25.879483 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 23:53:25.882197 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 23:53:25.938467 kernel: SCSI subsystem initialized Jul 10 23:53:25.943450 kernel: Loading iSCSI transport class v2.0-870. Jul 10 23:53:25.950449 kernel: iscsi: registered transport (tcp) Jul 10 23:53:25.965473 kernel: iscsi: registered transport (qla4xxx) Jul 10 23:53:25.965517 kernel: QLogic iSCSI HBA Driver Jul 10 23:53:25.982167 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 23:53:26.003349 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 23:53:26.005290 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 23:53:26.056038 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 23:53:26.058356 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 23:53:26.119462 kernel: raid6: neonx8 gen() 15774 MB/s Jul 10 23:53:26.136459 kernel: raid6: neonx4 gen() 15823 MB/s Jul 10 23:53:26.153448 kernel: raid6: neonx2 gen() 13243 MB/s Jul 10 23:53:26.170449 kernel: raid6: neonx1 gen() 10417 MB/s Jul 10 23:53:26.187457 kernel: raid6: int64x8 gen() 6895 MB/s Jul 10 23:53:26.204457 kernel: raid6: int64x4 gen() 7349 MB/s Jul 10 23:53:26.221448 kernel: raid6: int64x2 gen() 6101 MB/s Jul 10 23:53:26.238450 kernel: raid6: int64x1 gen() 5050 MB/s Jul 10 23:53:26.238468 kernel: raid6: using algorithm neonx4 gen() 15823 MB/s Jul 10 23:53:26.255465 kernel: raid6: .... xor() 12371 MB/s, rmw enabled Jul 10 23:53:26.255500 kernel: raid6: using neon recovery algorithm Jul 10 23:53:26.260658 kernel: xor: measuring software checksum speed Jul 10 23:53:26.260679 kernel: 8regs : 21624 MB/sec Jul 10 23:53:26.261703 kernel: 32regs : 21216 MB/sec Jul 10 23:53:26.261717 kernel: arm64_neon : 28128 MB/sec Jul 10 23:53:26.261725 kernel: xor: using function: arm64_neon (28128 MB/sec) Jul 10 23:53:26.317467 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 23:53:26.324286 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
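The raid6 and xor lines above are the kernel benchmarking its candidate implementations and keeping the fastest of each ("using algorithm neonx4", "using function: arm64_neon"). A tiny sketch (not part of the log), using the throughput figures copied from those entries, reproduces the choice:

    raid6_gen = {"neonx8": 15774, "neonx4": 15823, "neonx2": 13243, "neonx1": 10417,
                 "int64x8": 6895, "int64x4": 7349, "int64x2": 6101, "int64x1": 5050}  # MB/s
    xor_funcs = {"8regs": 21624, "32regs": 21216, "arm64_neon": 28128}                # MB/s

    print(max(raid6_gen, key=raid6_gen.get))  # neonx4     -- matches "using algorithm neonx4"
    print(max(xor_funcs, key=xor_funcs.get))  # arm64_neon -- matches "using function: arm64_neon"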
Jul 10 23:53:26.326676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:53:26.363984 systemd-udevd[498]: Using default interface naming scheme 'v255'. Jul 10 23:53:26.368305 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:53:26.370203 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 23:53:26.392722 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Jul 10 23:53:26.417063 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 23:53:26.420593 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 23:53:26.471867 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:53:26.475022 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 23:53:26.526451 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 10 23:53:26.530391 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 23:53:26.534634 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 23:53:26.534696 kernel: GPT:9289727 != 19775487 Jul 10 23:53:26.534708 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 23:53:26.535519 kernel: GPT:9289727 != 19775487 Jul 10 23:53:26.535551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 23:53:26.535725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:53:26.539952 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 23:53:26.539976 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 23:53:26.539941 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:53:26.541643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:53:26.577627 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 23:53:26.579371 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 23:53:26.580395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:53:26.589288 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 23:53:26.596535 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 23:53:26.597454 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 23:53:26.606895 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 23:53:26.607885 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 23:53:26.609396 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:53:26.611008 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 23:53:26.613272 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 23:53:26.614951 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 23:53:26.631681 disk-uuid[591]: Primary Header is updated. Jul 10 23:53:26.631681 disk-uuid[591]: Secondary Entries is updated. Jul 10 23:53:26.631681 disk-uuid[591]: Secondary Header is updated. 
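The GPT warnings above go together with the virtio_blk size report: the backup GPT header is found at sector 9289727 rather than at the last sector of the 19775488-sector disk, which is what you would expect when a smaller disk image has been written onto a larger virtual disk; the disk-uuid messages that follow show the headers being rewritten. A quick arithmetic sketch (not part of the log; the "original image size" figure is an inference from these numbers):

    SECTOR = 512
    disk_sectors = 19775488              # reported by virtio_blk for vda
    image_last   = 9289727               # sector where GPT found the backup header

    print(disk_sectors * SECTOR / 10**9)      # ~10.1 GB, matching the log
    print(disk_sectors * SECTOR / 2**30)      # ~9.43 GiB, matching the log
    print((image_last + 1) * SECTOR / 2**30)  # ~4.4 GiB -- inferred size of the original image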
Jul 10 23:53:26.636450 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 23:53:26.639749 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 23:53:27.650514 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 23:53:27.651351 disk-uuid[595]: The operation has completed successfully. Jul 10 23:53:27.679653 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 23:53:27.679769 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 23:53:27.707201 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 23:53:27.723397 sh[611]: Success Jul 10 23:53:27.740254 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 23:53:27.740292 kernel: device-mapper: uevent: version 1.0.3 Jul 10 23:53:27.743457 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 23:53:27.751564 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 10 23:53:27.778325 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 23:53:27.780920 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 23:53:27.796466 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 23:53:27.804083 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 23:53:27.804118 kernel: BTRFS: device fsid 1d7bf05b-5ff9-431d-b4bb-8cc553220034 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (623) Jul 10 23:53:27.806132 kernel: BTRFS info (device dm-0): first mount of filesystem 1d7bf05b-5ff9-431d-b4bb-8cc553220034 Jul 10 23:53:27.806147 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:53:27.806157 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 23:53:27.810567 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 23:53:27.811613 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 23:53:27.812552 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 23:53:27.813301 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 23:53:27.815733 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 23:53:27.839963 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (654) Jul 10 23:53:27.840017 kernel: BTRFS info (device vda6): first mount of filesystem b11340e8-a7f1-4911-a987-813f898c22db Jul 10 23:53:27.840028 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:53:27.840577 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 23:53:27.848571 kernel: BTRFS info (device vda6): last unmount of filesystem b11340e8-a7f1-4911-a987-813f898c22db Jul 10 23:53:27.848744 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 23:53:27.850649 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 23:53:27.912494 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 23:53:27.916401 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
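verity-setup.service above maps /dev/mapper/usr with dm-verity ("sha256 using shash sha256-ce"): blocks read from the /usr partition are checked against a precomputed hash tree whose root must equal the verity.usrhash= value from the kernel command line, so a tampered or corrupted /usr fails to read rather than booting silently. The following is a conceptual sketch only of the per-block primitive involved, not dm-verity's actual salted hash-tree format; the 4096-byte block size is an assumption:

    import hashlib

    BLOCK = 4096                              # assumed verity data-block size
    block = b"\x00" * BLOCK                   # stand-in for one block read from /usr

    leaf = hashlib.sha256(block).hexdigest()  # one leaf of the hash tree
    print(leaf)
    # dm-verity builds a tree of such digests (with a salt) up to a single root
    # hash; any mismatch along the path makes the corresponding read fail.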
Jul 10 23:53:27.960027 systemd-networkd[795]: lo: Link UP Jul 10 23:53:27.960039 systemd-networkd[795]: lo: Gained carrier Jul 10 23:53:27.960915 systemd-networkd[795]: Enumeration completed Jul 10 23:53:27.961012 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 23:53:27.961656 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:53:27.961660 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 23:53:27.962572 systemd-networkd[795]: eth0: Link UP Jul 10 23:53:27.962575 systemd-networkd[795]: eth0: Gained carrier Jul 10 23:53:27.962584 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:53:27.962718 systemd[1]: Reached target network.target - Network. Jul 10 23:53:27.983484 systemd-networkd[795]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 23:53:27.988491 ignition[701]: Ignition 2.21.0 Jul 10 23:53:27.988507 ignition[701]: Stage: fetch-offline Jul 10 23:53:27.988541 ignition[701]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:53:27.988549 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:53:27.988732 ignition[701]: parsed url from cmdline: "" Jul 10 23:53:27.988735 ignition[701]: no config URL provided Jul 10 23:53:27.988739 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 23:53:27.988746 ignition[701]: no config at "/usr/lib/ignition/user.ign" Jul 10 23:53:27.988763 ignition[701]: op(1): [started] loading QEMU firmware config module Jul 10 23:53:27.988767 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 23:53:27.994584 ignition[701]: op(1): [finished] loading QEMU firmware config module Jul 10 23:53:28.033189 ignition[701]: parsing config with SHA512: e75810c891bbdcec7e9e333e730c147c11e8dfba7d911ac20281d447b2ed9988af7bce49e07db4ee8171f57d5f1b29c2c9e8f92c1a6e0aead4228ae3765a1097 Jul 10 23:53:28.039117 unknown[701]: fetched base config from "system" Jul 10 23:53:28.039130 unknown[701]: fetched user config from "qemu" Jul 10 23:53:28.039613 ignition[701]: fetch-offline: fetch-offline passed Jul 10 23:53:28.039680 ignition[701]: Ignition finished successfully Jul 10 23:53:28.041941 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 23:53:28.043498 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 23:53:28.044297 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 23:53:28.072551 ignition[809]: Ignition 2.21.0 Jul 10 23:53:28.072566 ignition[809]: Stage: kargs Jul 10 23:53:28.072953 ignition[809]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:53:28.072968 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:53:28.074643 ignition[809]: kargs: kargs passed Jul 10 23:53:28.075038 ignition[809]: Ignition finished successfully Jul 10 23:53:28.079174 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 23:53:28.080922 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
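The fetch-offline stage above logs a SHA512 of the Ignition config it pulled in through the qemu_fw_cfg module (op(1)) before applying it. Assuming the digest is taken over the raw config bytes as fetched, it can be reproduced and compared against the logged value with a few lines; the config_bytes payload below is hypothetical, purely to show the call:

    import hashlib

    config_bytes = b'{"ignition": {"version": "3.4.0"}}'   # hypothetical payload
    print(hashlib.sha512(config_bytes).hexdigest())
    # Matching this digest against the "parsing config with SHA512: ..." entry
    # is a quick way to confirm which config a host actually booted with.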
Jul 10 23:53:28.101130 ignition[818]: Ignition 2.21.0 Jul 10 23:53:28.101149 ignition[818]: Stage: disks Jul 10 23:53:28.101294 ignition[818]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:53:28.101303 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:53:28.102983 ignition[818]: disks: disks passed Jul 10 23:53:28.103053 ignition[818]: Ignition finished successfully Jul 10 23:53:28.104704 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 23:53:28.105643 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 23:53:28.106878 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 23:53:28.108312 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 23:53:28.109769 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 23:53:28.111229 systemd[1]: Reached target basic.target - Basic System. Jul 10 23:53:28.113376 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 23:53:28.139640 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 23:53:28.144383 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 23:53:28.146667 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 23:53:28.212453 kernel: EXT4-fs (vda9): mounted filesystem 5e67f91a-7210-47f1-85b9-a7aa031a1904 r/w with ordered data mode. Quota mode: none. Jul 10 23:53:28.212659 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 23:53:28.213689 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 23:53:28.216086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 23:53:28.217450 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 23:53:28.218209 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 23:53:28.218246 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 23:53:28.218267 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 23:53:28.228284 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 23:53:28.231167 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 23:53:28.233237 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (836) Jul 10 23:53:28.234107 kernel: BTRFS info (device vda6): first mount of filesystem b11340e8-a7f1-4911-a987-813f898c22db Jul 10 23:53:28.234828 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:53:28.235441 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 23:53:28.238429 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 23:53:28.274416 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 23:53:28.278466 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory Jul 10 23:53:28.282484 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 23:53:28.285961 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 23:53:28.352076 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 23:53:28.354263 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
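The systemd-fsck summary above ("ROOT: clean, 15/553520 files, 52789/553472 blocks") reports inode and block usage as used/total pairs; parsed out, the ROOT filesystem is nearly empty at this point of the first boot. A small sketch (not part of the log):

    import re

    line = "ROOT: clean, 15/553520 files, 52789/553472 blocks"
    used_files, total_files, used_blocks, total_blocks = map(int, re.match(
        r"ROOT: clean, (\d+)/(\d+) files, (\d+)/(\d+) blocks", line).groups())

    print(f"inodes in use: {100 * used_files / total_files:.3f}%")    # ~0.003%
    print(f"blocks in use: {100 * used_blocks / total_blocks:.1f}%")  # ~9.5%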
Jul 10 23:53:28.355719 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 23:53:28.380464 kernel: BTRFS info (device vda6): last unmount of filesystem b11340e8-a7f1-4911-a987-813f898c22db Jul 10 23:53:28.396983 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 23:53:28.398954 ignition[949]: INFO : Ignition 2.21.0 Jul 10 23:53:28.398954 ignition[949]: INFO : Stage: mount Jul 10 23:53:28.398954 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 23:53:28.398954 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:53:28.401765 ignition[949]: INFO : mount: mount passed Jul 10 23:53:28.401765 ignition[949]: INFO : Ignition finished successfully Jul 10 23:53:28.402482 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 23:53:28.404405 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 23:53:28.803241 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 23:53:28.804800 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 23:53:28.834944 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (961) Jul 10 23:53:28.834996 kernel: BTRFS info (device vda6): first mount of filesystem b11340e8-a7f1-4911-a987-813f898c22db Jul 10 23:53:28.835960 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:53:28.835978 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 23:53:28.839222 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 23:53:28.867207 ignition[978]: INFO : Ignition 2.21.0 Jul 10 23:53:28.867207 ignition[978]: INFO : Stage: files Jul 10 23:53:28.869718 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 23:53:28.869718 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:53:28.869718 ignition[978]: DEBUG : files: compiled without relabeling support, skipping Jul 10 23:53:28.873391 ignition[978]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 23:53:28.873391 ignition[978]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 23:53:28.876719 ignition[978]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 23:53:28.878073 ignition[978]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 23:53:28.878073 ignition[978]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 23:53:28.877315 unknown[978]: wrote ssh authorized keys file for user: core Jul 10 23:53:28.881757 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 10 23:53:28.881757 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 10 23:53:28.947068 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 23:53:29.300937 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 10 23:53:29.300937 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 10 23:53:29.304838 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 
10 23:53:29.304838 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 23:53:29.304838 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 23:53:29.304838 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 23:53:29.304838 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 23:53:29.304838 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 23:53:29.304838 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 23:53:29.317098 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 23:53:29.317098 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 23:53:29.317098 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 23:53:29.317098 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 23:53:29.317098 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 23:53:29.317098 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 10 23:53:29.366568 systemd-networkd[795]: eth0: Gained IPv6LL Jul 10 23:53:29.667634 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 10 23:53:30.125966 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 23:53:30.125966 ignition[978]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 10 23:53:30.129036 ignition[978]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 23:53:30.130435 ignition[978]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 23:53:30.130435 ignition[978]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 10 23:53:30.130435 ignition[978]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 10 23:53:30.130435 ignition[978]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 23:53:30.130435 ignition[978]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 23:53:30.130435 ignition[978]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 10 23:53:30.130435 ignition[978]: 
INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 23:53:30.150351 ignition[978]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 23:53:30.154618 ignition[978]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 23:53:30.155761 ignition[978]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 23:53:30.155761 ignition[978]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 10 23:53:30.155761 ignition[978]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 23:53:30.155761 ignition[978]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 23:53:30.155761 ignition[978]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 23:53:30.155761 ignition[978]: INFO : files: files passed Jul 10 23:53:30.155761 ignition[978]: INFO : Ignition finished successfully Jul 10 23:53:30.157074 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 23:53:30.160161 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 23:53:30.163750 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 23:53:30.170419 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 23:53:30.172030 initrd-setup-root-after-ignition[1006]: grep: /sysroot/oem/oem-release: No such file or directory Jul 10 23:53:30.172316 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 23:53:30.176211 initrd-setup-root-after-ignition[1008]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 23:53:30.176211 initrd-setup-root-after-ignition[1008]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 23:53:30.179183 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 23:53:30.179030 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 23:53:30.180279 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 23:53:30.182955 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 23:53:30.236331 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 23:53:30.236478 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 23:53:30.238332 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 23:53:30.239949 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 23:53:30.241497 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 23:53:30.242298 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 23:53:30.273822 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 23:53:30.276034 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 23:53:30.296616 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
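Every action in the Ignition "files" stage above is bracketed by "op(N): [started] ..." and "op(N): [finished] ..." markers (including nested ops such as op(b): op(c)), and the stage only reports "files passed" once they have all completed. Here is a sketch (not part of the log) of how such an excerpt can be scanned to confirm that every started op also finished; the sample string is a shortened stand-in for the journal text:

    import re
    from collections import Counter

    def unfinished_ops(journal_text):
        """Return ops whose [started] and [finished] counts do not match."""
        started  = Counter(re.findall(r"(op\(\w+\)): \[started\]", journal_text))
        finished = Counter(re.findall(r"(op\(\w+\)): \[finished\]", journal_text))
        return {op: (started[op], finished[op])
                for op in started if started[op] != finished[op]}

    sample = ('ignition[978]: INFO : files: op(b): op(c): [started] writing unit '
              '"prepare-helm.service" ... ignition[978]: INFO : files: op(b): op(c): '
              '[finished] writing unit "prepare-helm.service" ...')
    print(unfinished_ops(sample))   # {} -- every started op has a matching finish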
Jul 10 23:53:30.297859 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:53:30.299563 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 23:53:30.301160 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 23:53:30.301276 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 23:53:30.303523 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 23:53:30.305253 systemd[1]: Stopped target basic.target - Basic System. Jul 10 23:53:30.306664 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 23:53:30.308100 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 23:53:30.309714 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 23:53:30.311334 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 10 23:53:30.313027 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 23:53:30.314847 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 23:53:30.316444 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 23:53:30.318245 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 23:53:30.319759 systemd[1]: Stopped target swap.target - Swaps. Jul 10 23:53:30.321070 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 23:53:30.321185 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 23:53:30.323260 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:53:30.324146 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 23:53:30.325812 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 23:53:30.326508 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 23:53:30.327465 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 23:53:30.327577 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 23:53:30.330158 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 23:53:30.330272 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 23:53:30.332408 systemd[1]: Stopped target paths.target - Path Units. Jul 10 23:53:30.333705 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 23:53:30.333817 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 23:53:30.335368 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 23:53:30.337058 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 23:53:30.338576 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 23:53:30.338650 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 23:53:30.339918 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 23:53:30.339990 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 23:53:30.341718 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 23:53:30.341841 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 23:53:30.343722 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 23:53:30.343824 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jul 10 23:53:30.345886 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 23:53:30.347811 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 23:53:30.348510 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 23:53:30.348627 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:53:30.350109 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 23:53:30.350207 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 23:53:30.355237 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 23:53:30.359645 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 23:53:30.367844 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 23:53:30.372107 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 23:53:30.372201 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 23:53:30.375307 ignition[1033]: INFO : Ignition 2.21.0 Jul 10 23:53:30.375307 ignition[1033]: INFO : Stage: umount Jul 10 23:53:30.375307 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 23:53:30.375307 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:53:30.375307 ignition[1033]: INFO : umount: umount passed Jul 10 23:53:30.375307 ignition[1033]: INFO : Ignition finished successfully Jul 10 23:53:30.375874 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 23:53:30.375964 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 23:53:30.377056 systemd[1]: Stopped target network.target - Network. Jul 10 23:53:30.378308 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 23:53:30.378367 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 23:53:30.379819 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 23:53:30.379858 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 23:53:30.381733 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 23:53:30.381793 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 23:53:30.383012 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 23:53:30.383046 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 23:53:30.384462 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 23:53:30.384505 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 23:53:30.386366 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 23:53:30.388145 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 23:53:30.397002 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 23:53:30.397113 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 23:53:30.399993 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 23:53:30.400225 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 23:53:30.400259 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:53:30.403372 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 23:53:30.403666 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jul 10 23:53:30.403789 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 23:53:30.407152 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 23:53:30.407555 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 23:53:30.409301 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 23:53:30.409384 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 23:53:30.411948 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 23:53:30.412585 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 23:53:30.412632 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 23:53:30.414389 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 23:53:30.414428 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:53:30.418303 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 23:53:30.418352 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 23:53:30.421565 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:53:30.423489 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 23:53:30.439058 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 23:53:30.446574 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:53:30.447794 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 23:53:30.447883 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 23:53:30.449089 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 23:53:30.449154 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 23:53:30.450297 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 23:53:30.450328 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 23:53:30.451603 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 23:53:30.451646 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 23:53:30.453545 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 23:53:30.453594 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 23:53:30.455486 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 23:53:30.455532 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 23:53:30.458331 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 23:53:30.459690 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 10 23:53:30.459741 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 23:53:30.461911 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 23:53:30.461960 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 23:53:30.464364 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 10 23:53:30.464407 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 23:53:30.466199 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 10 23:53:30.466240 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:53:30.467933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 23:53:30.467975 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:53:30.472308 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 23:53:30.472422 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 23:53:30.473931 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 23:53:30.475937 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 23:53:30.493960 systemd[1]: Switching root. Jul 10 23:53:30.518456 systemd-journald[244]: Journal stopped Jul 10 23:53:31.304097 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Jul 10 23:53:31.304159 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 23:53:31.304171 kernel: SELinux: policy capability open_perms=1 Jul 10 23:53:31.304180 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 23:53:31.304193 kernel: SELinux: policy capability always_check_network=0 Jul 10 23:53:31.304208 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 23:53:31.304221 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 23:53:31.304231 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 23:53:31.304240 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 23:53:31.304249 kernel: SELinux: policy capability userspace_initial_context=0 Jul 10 23:53:31.304262 kernel: audit: type=1403 audit(1752191610.684:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 23:53:31.304276 systemd[1]: Successfully loaded SELinux policy in 56.508ms. Jul 10 23:53:31.304288 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.791ms. Jul 10 23:53:31.304300 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 23:53:31.304311 systemd[1]: Detected virtualization kvm. Jul 10 23:53:31.304323 systemd[1]: Detected architecture arm64. Jul 10 23:53:31.304335 systemd[1]: Detected first boot. Jul 10 23:53:31.304345 systemd[1]: Initializing machine ID from VM UUID. Jul 10 23:53:31.304354 zram_generator::config[1079]: No configuration found. Jul 10 23:53:31.304369 kernel: NET: Registered PF_VSOCK protocol family Jul 10 23:53:31.304378 systemd[1]: Populated /etc with preset unit settings. Jul 10 23:53:31.304394 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 23:53:31.304404 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 23:53:31.304414 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 23:53:31.304424 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 23:53:31.304452 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 23:53:31.304464 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 23:53:31.304474 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 23:53:31.304484 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jul 10 23:53:31.304494 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 23:53:31.304505 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 23:53:31.304515 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 23:53:31.304526 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 23:53:31.304537 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 23:53:31.304548 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 23:53:31.304558 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 23:53:31.304572 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 23:53:31.304582 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 23:53:31.304592 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 23:53:31.304602 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 10 23:53:31.304612 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 23:53:31.304624 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:53:31.304634 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 23:53:31.304644 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 23:53:31.304654 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 23:53:31.304664 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 23:53:31.304673 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:53:31.304684 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 23:53:31.304694 systemd[1]: Reached target slices.target - Slice Units. Jul 10 23:53:31.304704 systemd[1]: Reached target swap.target - Swaps. Jul 10 23:53:31.304716 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 23:53:31.304726 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 23:53:31.304735 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 23:53:31.304745 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 23:53:31.304755 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 23:53:31.304774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 23:53:31.304786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 23:53:31.304795 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 23:53:31.304805 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 23:53:31.304817 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 23:53:31.304827 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 23:53:31.304837 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 23:53:31.304847 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jul 10 23:53:31.304857 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 23:53:31.304867 systemd[1]: Reached target machines.target - Containers. Jul 10 23:53:31.304877 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 23:53:31.304887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:53:31.304900 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 23:53:31.304910 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 23:53:31.304920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:53:31.304929 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 23:53:31.304939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:53:31.304950 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 23:53:31.304960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 23:53:31.304970 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 23:53:31.304980 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 23:53:31.304992 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 23:53:31.305002 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 23:53:31.305012 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 23:53:31.305022 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:53:31.305032 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 23:53:31.305042 kernel: fuse: init (API version 7.41) Jul 10 23:53:31.305053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 23:53:31.305062 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 23:53:31.305072 kernel: loop: module loaded Jul 10 23:53:31.305084 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 23:53:31.305094 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 23:53:31.305103 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 23:53:31.305114 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 23:53:31.305125 kernel: ACPI: bus type drm_connector registered Jul 10 23:53:31.305135 systemd[1]: Stopped verity-setup.service. Jul 10 23:53:31.305144 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 23:53:31.305155 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 23:53:31.305164 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 23:53:31.305175 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 23:53:31.305184 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 23:53:31.305194 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jul 10 23:53:31.305234 systemd-journald[1144]: Collecting audit messages is disabled. Jul 10 23:53:31.305258 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:53:31.305268 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 23:53:31.305278 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 23:53:31.305288 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:53:31.305299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:53:31.305310 systemd-journald[1144]: Journal started Jul 10 23:53:31.305331 systemd-journald[1144]: Runtime Journal (/run/log/journal/043cc503cd2b400ebc0357d3ccd5f40f) is 6M, max 48.5M, 42.4M free. Jul 10 23:53:31.080138 systemd[1]: Queued start job for default target multi-user.target. Jul 10 23:53:31.104345 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 23:53:31.104739 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 23:53:31.307839 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 23:53:31.308699 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 23:53:31.309474 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 23:53:31.310649 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 23:53:31.311756 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:53:31.311956 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 23:53:31.313104 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 23:53:31.313264 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 23:53:31.314400 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:53:31.314597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:53:31.315769 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 23:53:31.317125 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 23:53:31.318359 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 23:53:31.319690 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 23:53:31.333862 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 23:53:31.336575 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 23:53:31.338886 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 23:53:31.340084 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 23:53:31.340134 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 23:53:31.342322 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 23:53:31.347605 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 23:53:31.348564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:53:31.350721 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 23:53:31.352559 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jul 10 23:53:31.353523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 23:53:31.357586 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 23:53:31.358512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:53:31.359651 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:53:31.361647 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 23:53:31.369543 systemd-journald[1144]: Time spent on flushing to /var/log/journal/043cc503cd2b400ebc0357d3ccd5f40f is 23.966ms for 882 entries. Jul 10 23:53:31.369543 systemd-journald[1144]: System Journal (/var/log/journal/043cc503cd2b400ebc0357d3ccd5f40f) is 8M, max 195.6M, 187.6M free. Jul 10 23:53:31.402057 systemd-journald[1144]: Received client request to flush runtime journal. Jul 10 23:53:31.402097 kernel: loop0: detected capacity change from 0 to 138376 Jul 10 23:53:31.402110 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 23:53:31.365946 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 23:53:31.368851 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:53:31.373895 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 23:53:31.375445 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 23:53:31.387483 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 23:53:31.389844 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 23:53:31.394163 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 23:53:31.409384 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 23:53:31.411256 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:53:31.417497 kernel: loop1: detected capacity change from 0 to 203944 Jul 10 23:53:31.434334 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jul 10 23:53:31.434350 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jul 10 23:53:31.441990 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 23:53:31.444050 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 23:53:31.449816 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 23:53:31.471575 kernel: loop2: detected capacity change from 0 to 107312 Jul 10 23:53:31.485251 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 23:53:31.488874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 23:53:31.498660 kernel: loop3: detected capacity change from 0 to 138376 Jul 10 23:53:31.511468 kernel: loop4: detected capacity change from 0 to 203944 Jul 10 23:53:31.514564 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jul 10 23:53:31.514588 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. 
Jul 10 23:53:31.517469 kernel: loop5: detected capacity change from 0 to 107312 Jul 10 23:53:31.520683 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 23:53:31.524017 (sd-merge)[1220]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 23:53:31.524393 (sd-merge)[1220]: Merged extensions into '/usr'. Jul 10 23:53:31.527816 systemd[1]: Reload requested from client PID 1195 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 23:53:31.527829 systemd[1]: Reloading... Jul 10 23:53:31.591484 zram_generator::config[1252]: No configuration found. Jul 10 23:53:31.667033 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:53:31.668163 ldconfig[1190]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 23:53:31.732581 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 23:53:31.732740 systemd[1]: Reloading finished in 204 ms. Jul 10 23:53:31.765468 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 23:53:31.766908 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 23:53:31.786053 systemd[1]: Starting ensure-sysext.service... Jul 10 23:53:31.787697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 23:53:31.803870 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 23:53:31.803908 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 23:53:31.804139 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 23:53:31.804332 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 23:53:31.804977 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 23:53:31.805187 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jul 10 23:53:31.805202 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... Jul 10 23:53:31.805217 systemd[1]: Reloading... Jul 10 23:53:31.805237 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jul 10 23:53:31.807478 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 23:53:31.807489 systemd-tmpfiles[1284]: Skipping /boot Jul 10 23:53:31.816298 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 23:53:31.816314 systemd-tmpfiles[1284]: Skipping /boot Jul 10 23:53:31.853491 zram_generator::config[1311]: No configuration found. Jul 10 23:53:31.920749 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:53:31.981902 systemd[1]: Reloading finished in 176 ms. Jul 10 23:53:32.000072 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 23:53:32.005528 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
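The sd-merge messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes images onto /usr. A small illustrative Python sketch, assuming the standard /etc/extensions and /var/lib/extensions search directories, that lists the images such a merge would consider, including the kubernetes.raw symlink written during the Ignition files stage:

```python
from pathlib import Path

# Illustrative sketch: enumerate sysext images in two of the standard
# systemd-sysext search directories. On this node, /etc/extensions holds
# the kubernetes.raw symlink created earlier by Ignition.
SEARCH_DIRS = [Path("/etc/extensions"), Path("/var/lib/extensions")]

for directory in SEARCH_DIRS:
    if not directory.is_dir():
        continue
    for entry in sorted(directory.iterdir()):
        # sysext images are .raw files (or directories); symlinks are resolved
        if entry.suffix != ".raw" and not entry.is_dir():
            continue
        target = entry.resolve() if entry.is_symlink() else entry
        print(f"{entry} -> {target}")
```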
Jul 10 23:53:32.014644 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:53:32.016503 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 23:53:32.018291 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 23:53:32.023100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 23:53:32.026577 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:53:32.028938 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 23:53:32.036310 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 23:53:32.039747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:53:32.043823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:53:32.047648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:53:32.050266 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 23:53:32.051134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:53:32.051275 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:53:32.057902 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 23:53:32.062808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:53:32.063108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:53:32.064970 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:53:32.065177 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:53:32.066820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:53:32.067029 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 23:53:32.072014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:53:32.073790 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:53:32.076822 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Jul 10 23:53:32.077658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:53:32.079597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 23:53:32.080399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:53:32.080574 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:53:32.087099 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 23:53:32.089532 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 23:53:32.091381 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jul 10 23:53:32.093315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:53:32.093503 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:53:32.095176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:53:32.095375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 23:53:32.097137 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:53:32.097323 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:53:32.099616 augenrules[1384]: No rules Jul 10 23:53:32.099961 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 23:53:32.103018 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:53:32.103234 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:53:32.108306 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:53:32.109980 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 23:53:32.129872 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:53:32.130658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:53:32.132559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:53:32.136011 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 23:53:32.138106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:53:32.141467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 23:53:32.142863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:53:32.142911 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:53:32.145725 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 23:53:32.146844 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 23:53:32.157474 systemd[1]: Finished ensure-sysext.service. Jul 10 23:53:32.158792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:53:32.159015 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:53:32.160486 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 23:53:32.160681 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 23:53:32.161974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:53:32.162119 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 23:53:32.163533 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:53:32.163679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:53:32.173485 augenrules[1426]: /sbin/augenrules: No change Jul 10 23:53:32.173891 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 10 23:53:32.173947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:53:32.179248 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 23:53:32.182208 augenrules[1456]: No rules Jul 10 23:53:32.186615 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:53:32.186852 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:53:32.251494 systemd-resolved[1351]: Positive Trust Anchors: Jul 10 23:53:32.251512 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 23:53:32.251544 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 23:53:32.266426 systemd-resolved[1351]: Defaulting to hostname 'linux'. Jul 10 23:53:32.270160 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 10 23:53:32.279949 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 23:53:32.280978 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 23:53:32.281962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 23:53:32.282849 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 23:53:32.283670 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 23:53:32.284629 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 23:53:32.285568 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 23:53:32.286475 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 23:53:32.286502 systemd[1]: Reached target paths.target - Path Units. Jul 10 23:53:32.287135 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 23:53:32.288028 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 23:53:32.288873 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 23:53:32.289741 systemd[1]: Reached target timers.target - Timer Units. Jul 10 23:53:32.291266 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 23:53:32.293360 systemd-networkd[1433]: lo: Link UP Jul 10 23:53:32.293371 systemd-networkd[1433]: lo: Gained carrier Jul 10 23:53:32.293391 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 23:53:32.296980 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 23:53:32.297701 systemd-networkd[1433]: Enumeration completed Jul 10 23:53:32.298568 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 23:53:32.299482 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jul 10 23:53:32.302523 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 23:53:32.303939 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 23:53:32.305005 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:53:32.305015 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 23:53:32.305408 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 23:53:32.305647 systemd-networkd[1433]: eth0: Link UP Jul 10 23:53:32.305774 systemd-networkd[1433]: eth0: Gained carrier Jul 10 23:53:32.305793 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:53:32.307063 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 23:53:32.310011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 23:53:32.313000 systemd[1]: Reached target network.target - Network. Jul 10 23:53:32.313679 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 23:53:32.314343 systemd[1]: Reached target basic.target - Basic System. Jul 10 23:53:32.315074 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 23:53:32.315101 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 23:53:32.318554 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 23:53:32.321514 systemd-networkd[1433]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 23:53:32.321653 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 23:53:32.323650 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 23:53:32.324627 systemd-timesyncd[1455]: Network configuration changed, trying to establish connection. Jul 10 23:53:31.841997 systemd-timesyncd[1455]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 23:53:31.846582 systemd-journald[1144]: Time jumped backwards, rotating. Jul 10 23:53:31.842048 systemd-timesyncd[1455]: Initial clock synchronization to Thu 2025-07-10 23:53:31.841897 UTC. Jul 10 23:53:31.842512 systemd-resolved[1351]: Clock change detected. Flushing caches. Jul 10 23:53:31.842685 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 23:53:31.850335 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 23:53:31.851540 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 23:53:31.852548 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 23:53:31.855345 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 23:53:31.857582 jq[1477]: false Jul 10 23:53:31.858341 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 23:53:31.860737 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 23:53:31.863531 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 23:53:31.866889 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jul 10 23:53:31.871280 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 23:53:31.875330 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 23:53:31.877112 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 23:53:31.877540 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 23:53:31.882410 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 23:53:31.885404 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 23:53:31.889214 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 23:53:31.891513 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 23:53:31.891687 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 23:53:31.892534 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 23:53:31.892700 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 23:53:31.901288 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 23:53:31.901461 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 23:53:31.916610 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 23:53:31.923258 jq[1504]: true Jul 10 23:53:31.925820 extend-filesystems[1479]: Found /dev/vda6 Jul 10 23:53:31.939994 extend-filesystems[1479]: Found /dev/vda9 Jul 10 23:53:31.940861 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 23:53:31.942917 dbus-daemon[1475]: [system] SELinux support is enabled Jul 10 23:53:31.949984 extend-filesystems[1479]: Checking size of /dev/vda9 Jul 10 23:53:31.943205 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 23:53:31.950687 tar[1508]: linux-arm64/helm Jul 10 23:53:31.944247 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 23:53:31.948759 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 23:53:31.948789 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 23:53:31.951043 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 23:53:31.951070 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 10 23:53:31.964499 extend-filesystems[1479]: Resized partition /dev/vda9 Jul 10 23:53:31.969679 extend-filesystems[1534]: resize2fs 1.47.2 (1-Jan-2025) Jul 10 23:53:31.971891 jq[1520]: true Jul 10 23:53:31.975186 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 23:53:31.993298 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 23:53:32.004927 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 23:53:32.004927 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 23:53:32.004927 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 23:53:32.008869 extend-filesystems[1479]: Resized filesystem in /dev/vda9 Jul 10 23:53:32.012461 update_engine[1498]: I20250710 23:53:32.012264 1498 main.cc:92] Flatcar Update Engine starting Jul 10 23:53:32.013213 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 23:53:32.013483 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 23:53:32.015625 systemd[1]: Started update-engine.service - Update Engine. Jul 10 23:53:32.019834 update_engine[1498]: I20250710 23:53:32.015679 1498 update_check_scheduler.cc:74] Next update check in 2m55s Jul 10 23:53:32.021879 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 23:53:32.067020 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:53:32.073454 bash[1562]: Updated "/home/core/.ssh/authorized_keys" Jul 10 23:53:32.078840 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 23:53:32.082412 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 23:53:32.099038 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 23:53:32.101318 systemd-logind[1488]: New seat seat0. Jul 10 23:53:32.102787 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 23:53:32.139319 locksmithd[1548]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 23:53:32.166215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
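For scale, the resize2fs figures above convert from 4 KiB block counts to byte sizes as follows; a quick check in Python:

```python
# Quick arithmetic on the EXT4 resize logged above: 4 KiB blocks,
# 553472 blocks before the online resize and 1864699 after.
BLOCK_SIZE = 4096

for label, blocks in (("before", 553_472), ("after", 1_864_699)):
    size_bytes = blocks * BLOCK_SIZE
    print(f"{label}: {blocks} blocks = {size_bytes} bytes "
          f"= {size_bytes / 2**30:.2f} GiB")
# before: ~2.11 GiB, after: ~7.11 GiB
```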
Jul 10 23:53:32.186018 containerd[1514]: time="2025-07-10T23:53:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 23:53:32.187886 containerd[1514]: time="2025-07-10T23:53:32.187843271Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 23:53:32.196650 containerd[1514]: time="2025-07-10T23:53:32.196595991Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.92µs" Jul 10 23:53:32.196650 containerd[1514]: time="2025-07-10T23:53:32.196639471Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 23:53:32.196650 containerd[1514]: time="2025-07-10T23:53:32.196659471Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 23:53:32.196861 containerd[1514]: time="2025-07-10T23:53:32.196837391Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 23:53:32.196885 containerd[1514]: time="2025-07-10T23:53:32.196860511Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 23:53:32.196922 containerd[1514]: time="2025-07-10T23:53:32.196886271Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 23:53:32.197538 containerd[1514]: time="2025-07-10T23:53:32.196937991Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 23:53:32.197538 containerd[1514]: time="2025-07-10T23:53:32.196953071Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 23:53:32.197538 containerd[1514]: time="2025-07-10T23:53:32.197215951Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 23:53:32.197538 containerd[1514]: time="2025-07-10T23:53:32.197233351Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 23:53:32.197538 containerd[1514]: time="2025-07-10T23:53:32.197244911Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 23:53:32.197538 containerd[1514]: time="2025-07-10T23:53:32.197252831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 23:53:32.197538 containerd[1514]: time="2025-07-10T23:53:32.197341671Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 23:53:32.197722 containerd[1514]: time="2025-07-10T23:53:32.197562511Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 23:53:32.197722 containerd[1514]: time="2025-07-10T23:53:32.197596911Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 10 23:53:32.197722 containerd[1514]: time="2025-07-10T23:53:32.197616191Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 23:53:32.198254 containerd[1514]: time="2025-07-10T23:53:32.198164431Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 23:53:32.198523 containerd[1514]: time="2025-07-10T23:53:32.198499471Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 23:53:32.198608 containerd[1514]: time="2025-07-10T23:53:32.198589031Z" level=info msg="metadata content store policy set" policy=shared Jul 10 23:53:32.202148 containerd[1514]: time="2025-07-10T23:53:32.202064351Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 23:53:32.202148 containerd[1514]: time="2025-07-10T23:53:32.202125111Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 23:53:32.202148 containerd[1514]: time="2025-07-10T23:53:32.202147391Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 23:53:32.202320 containerd[1514]: time="2025-07-10T23:53:32.202161591Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 23:53:32.202320 containerd[1514]: time="2025-07-10T23:53:32.202195271Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 23:53:32.202320 containerd[1514]: time="2025-07-10T23:53:32.202260271Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 23:53:32.202320 containerd[1514]: time="2025-07-10T23:53:32.202272391Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 23:53:32.202320 containerd[1514]: time="2025-07-10T23:53:32.202284071Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 23:53:32.202320 containerd[1514]: time="2025-07-10T23:53:32.202296871Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 23:53:32.202320 containerd[1514]: time="2025-07-10T23:53:32.202307391Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 23:53:32.202449 containerd[1514]: time="2025-07-10T23:53:32.202316871Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 23:53:32.202449 containerd[1514]: time="2025-07-10T23:53:32.202381751Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 23:53:32.202631 containerd[1514]: time="2025-07-10T23:53:32.202586631Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 23:53:32.202631 containerd[1514]: time="2025-07-10T23:53:32.202624471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 23:53:32.202677 containerd[1514]: time="2025-07-10T23:53:32.202642031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 23:53:32.202677 containerd[1514]: time="2025-07-10T23:53:32.202652751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jul 10 23:53:32.202677 containerd[1514]: time="2025-07-10T23:53:32.202663391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 23:53:32.202677 containerd[1514]: time="2025-07-10T23:53:32.202673351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 23:53:32.202743 containerd[1514]: time="2025-07-10T23:53:32.202684271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 23:53:32.202743 containerd[1514]: time="2025-07-10T23:53:32.202695231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 23:53:32.202743 containerd[1514]: time="2025-07-10T23:53:32.202706191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 23:53:32.202743 containerd[1514]: time="2025-07-10T23:53:32.202717791Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 23:53:32.202805 containerd[1514]: time="2025-07-10T23:53:32.202780111Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 23:53:32.203042 containerd[1514]: time="2025-07-10T23:53:32.203011391Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 23:53:32.203078 containerd[1514]: time="2025-07-10T23:53:32.203063311Z" level=info msg="Start snapshots syncer" Jul 10 23:53:32.203114 containerd[1514]: time="2025-07-10T23:53:32.203101591Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 23:53:32.203635 containerd[1514]: time="2025-07-10T23:53:32.203583751Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 23:53:32.203738 containerd[1514]: time="2025-07-10T23:53:32.203649471Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 23:53:32.203807 containerd[1514]: time="2025-07-10T23:53:32.203782591Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 23:53:32.203957 containerd[1514]: time="2025-07-10T23:53:32.203924351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 23:53:32.204048 containerd[1514]: time="2025-07-10T23:53:32.204028671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 23:53:32.204071 containerd[1514]: time="2025-07-10T23:53:32.204052711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 23:53:32.204071 containerd[1514]: time="2025-07-10T23:53:32.204065591Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 23:53:32.204115 containerd[1514]: time="2025-07-10T23:53:32.204078311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 23:53:32.204115 containerd[1514]: time="2025-07-10T23:53:32.204089871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 23:53:32.204115 containerd[1514]: time="2025-07-10T23:53:32.204100591Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 23:53:32.204164 containerd[1514]: time="2025-07-10T23:53:32.204126351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 23:53:32.204164 containerd[1514]: 
time="2025-07-10T23:53:32.204137631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 23:53:32.204164 containerd[1514]: time="2025-07-10T23:53:32.204152711Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 23:53:32.204383 containerd[1514]: time="2025-07-10T23:53:32.204363431Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 23:53:32.204413 containerd[1514]: time="2025-07-10T23:53:32.204396831Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 23:53:32.204413 containerd[1514]: time="2025-07-10T23:53:32.204408871Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 23:53:32.204455 containerd[1514]: time="2025-07-10T23:53:32.204418351Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 23:53:32.204455 containerd[1514]: time="2025-07-10T23:53:32.204426871Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 23:53:32.204455 containerd[1514]: time="2025-07-10T23:53:32.204436311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 23:53:32.204573 containerd[1514]: time="2025-07-10T23:53:32.204510551Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 23:53:32.204628 containerd[1514]: time="2025-07-10T23:53:32.204612951Z" level=info msg="runtime interface created" Jul 10 23:53:32.204628 containerd[1514]: time="2025-07-10T23:53:32.204623511Z" level=info msg="created NRI interface" Jul 10 23:53:32.204672 containerd[1514]: time="2025-07-10T23:53:32.204639031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 23:53:32.204672 containerd[1514]: time="2025-07-10T23:53:32.204651911Z" level=info msg="Connect containerd service" Jul 10 23:53:32.204751 containerd[1514]: time="2025-07-10T23:53:32.204732511Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 23:53:32.205959 containerd[1514]: time="2025-07-10T23:53:32.205919871Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 23:53:32.309297 containerd[1514]: time="2025-07-10T23:53:32.309160951Z" level=info msg="Start subscribing containerd event" Jul 10 23:53:32.309297 containerd[1514]: time="2025-07-10T23:53:32.309261151Z" level=info msg="Start recovering state" Jul 10 23:53:32.309426 containerd[1514]: time="2025-07-10T23:53:32.309349071Z" level=info msg="Start event monitor" Jul 10 23:53:32.309426 containerd[1514]: time="2025-07-10T23:53:32.309366231Z" level=info msg="Start cni network conf syncer for default" Jul 10 23:53:32.309426 containerd[1514]: time="2025-07-10T23:53:32.309375431Z" level=info msg="Start streaming server" Jul 10 23:53:32.309426 containerd[1514]: time="2025-07-10T23:53:32.309384631Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 23:53:32.309426 containerd[1514]: 
time="2025-07-10T23:53:32.309391231Z" level=info msg="runtime interface starting up..." Jul 10 23:53:32.309426 containerd[1514]: time="2025-07-10T23:53:32.309396231Z" level=info msg="starting plugins..." Jul 10 23:53:32.309426 containerd[1514]: time="2025-07-10T23:53:32.309419591Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 23:53:32.309769 containerd[1514]: time="2025-07-10T23:53:32.309747671Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 23:53:32.309862 containerd[1514]: time="2025-07-10T23:53:32.309804951Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 23:53:32.311040 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 23:53:32.313498 containerd[1514]: time="2025-07-10T23:53:32.312265511Z" level=info msg="containerd successfully booted in 0.126590s" Jul 10 23:53:32.435475 tar[1508]: linux-arm64/LICENSE Jul 10 23:53:32.435581 tar[1508]: linux-arm64/README.md Jul 10 23:53:32.452185 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 23:53:32.488549 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 23:53:32.507321 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 23:53:32.511678 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 23:53:32.537891 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 23:53:32.538140 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 23:53:32.540837 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 23:53:32.565938 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 23:53:32.568674 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 23:53:32.570616 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 10 23:53:32.571573 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 23:53:33.171321 systemd-networkd[1433]: eth0: Gained IPv6LL Jul 10 23:53:33.173810 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 23:53:33.177284 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 23:53:33.179725 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 23:53:33.181961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:53:33.183883 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 23:53:33.214291 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 23:53:33.215894 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 23:53:33.216089 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 23:53:33.217953 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 23:53:33.727750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:53:33.729058 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 23:53:33.730017 systemd[1]: Startup finished in 2.121s (kernel) + 5.059s (initrd) + 3.585s (userspace) = 10.766s. 
Jul 10 23:53:33.731447 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:53:34.156012 kubelet[1636]: E0710 23:53:34.155886 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:53:34.158397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:53:34.158546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:53:34.158848 systemd[1]: kubelet.service: Consumed 834ms CPU time, 257.3M memory peak. Jul 10 23:53:38.398626 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 23:53:38.399724 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:60318.service - OpenSSH per-connection server daemon (10.0.0.1:60318). Jul 10 23:53:38.482808 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 60318 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:53:38.484648 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:53:38.490452 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 23:53:38.491350 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 23:53:38.496676 systemd-logind[1488]: New session 1 of user core. Jul 10 23:53:38.515124 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 23:53:38.517569 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 23:53:38.532001 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 23:53:38.533956 systemd-logind[1488]: New session c1 of user core. Jul 10 23:53:38.639158 systemd[1653]: Queued start job for default target default.target. Jul 10 23:53:38.659072 systemd[1653]: Created slice app.slice - User Application Slice. Jul 10 23:53:38.659101 systemd[1653]: Reached target paths.target - Paths. Jul 10 23:53:38.659137 systemd[1653]: Reached target timers.target - Timers. Jul 10 23:53:38.660325 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 23:53:38.669153 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 23:53:38.669231 systemd[1653]: Reached target sockets.target - Sockets. Jul 10 23:53:38.669268 systemd[1653]: Reached target basic.target - Basic System. Jul 10 23:53:38.669295 systemd[1653]: Reached target default.target - Main User Target. Jul 10 23:53:38.669320 systemd[1653]: Startup finished in 130ms. Jul 10 23:53:38.669495 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 23:53:38.670782 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 23:53:38.730587 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:60330.service - OpenSSH per-connection server daemon (10.0.0.1:60330). Jul 10 23:53:38.787875 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 60330 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:53:38.789097 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:53:38.793362 systemd-logind[1488]: New session 2 of user core. 
Jul 10 23:53:38.805295 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 23:53:38.856809 sshd[1666]: Connection closed by 10.0.0.1 port 60330 Jul 10 23:53:38.856676 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Jul 10 23:53:38.877225 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:60330.service: Deactivated successfully. Jul 10 23:53:38.878627 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 23:53:38.879236 systemd-logind[1488]: Session 2 logged out. Waiting for processes to exit. Jul 10 23:53:38.882136 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:60332.service - OpenSSH per-connection server daemon (10.0.0.1:60332). Jul 10 23:53:38.882613 systemd-logind[1488]: Removed session 2. Jul 10 23:53:38.930356 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 60332 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:53:38.931408 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:53:38.935965 systemd-logind[1488]: New session 3 of user core. Jul 10 23:53:38.943316 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 23:53:38.991159 sshd[1674]: Connection closed by 10.0.0.1 port 60332 Jul 10 23:53:38.991485 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Jul 10 23:53:39.006147 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:60332.service: Deactivated successfully. Jul 10 23:53:39.007508 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 23:53:39.008115 systemd-logind[1488]: Session 3 logged out. Waiting for processes to exit. Jul 10 23:53:39.010223 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:60346.service - OpenSSH per-connection server daemon (10.0.0.1:60346). Jul 10 23:53:39.011015 systemd-logind[1488]: Removed session 3. Jul 10 23:53:39.056326 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 60346 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:53:39.057602 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:53:39.061534 systemd-logind[1488]: New session 4 of user core. Jul 10 23:53:39.074315 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 23:53:39.124942 sshd[1682]: Connection closed by 10.0.0.1 port 60346 Jul 10 23:53:39.124806 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Jul 10 23:53:39.137100 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:60346.service: Deactivated successfully. Jul 10 23:53:39.139434 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 23:53:39.141919 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit. Jul 10 23:53:39.143332 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:60360.service - OpenSSH per-connection server daemon (10.0.0.1:60360). Jul 10 23:53:39.144121 systemd-logind[1488]: Removed session 4. Jul 10 23:53:39.182482 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 60360 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:53:39.183415 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:53:39.187388 systemd-logind[1488]: New session 5 of user core. Jul 10 23:53:39.197322 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 10 23:53:39.258148 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 23:53:39.258487 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:53:39.272850 sudo[1691]: pam_unix(sudo:session): session closed for user root Jul 10 23:53:39.274732 sshd[1690]: Connection closed by 10.0.0.1 port 60360 Jul 10 23:53:39.274638 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Jul 10 23:53:39.292214 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:60360.service: Deactivated successfully. Jul 10 23:53:39.294468 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 23:53:39.295150 systemd-logind[1488]: Session 5 logged out. Waiting for processes to exit. Jul 10 23:53:39.297670 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:60370.service - OpenSSH per-connection server daemon (10.0.0.1:60370). Jul 10 23:53:39.298096 systemd-logind[1488]: Removed session 5. Jul 10 23:53:39.350049 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 60370 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:53:39.351292 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:53:39.355028 systemd-logind[1488]: New session 6 of user core. Jul 10 23:53:39.367324 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 23:53:39.417344 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 23:53:39.417628 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:53:39.488404 sudo[1701]: pam_unix(sudo:session): session closed for user root Jul 10 23:53:39.493433 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 23:53:39.493702 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:53:39.503103 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:53:39.552677 augenrules[1723]: No rules Jul 10 23:53:39.553824 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:53:39.554058 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:53:39.556153 sudo[1700]: pam_unix(sudo:session): session closed for user root Jul 10 23:53:39.557317 sshd[1699]: Connection closed by 10.0.0.1 port 60370 Jul 10 23:53:39.557882 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Jul 10 23:53:39.568247 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:60370.service: Deactivated successfully. Jul 10 23:53:39.570572 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 23:53:39.571985 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit. Jul 10 23:53:39.574353 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:60372.service - OpenSSH per-connection server daemon (10.0.0.1:60372). Jul 10 23:53:39.574973 systemd-logind[1488]: Removed session 6. Jul 10 23:53:39.625746 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 60372 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:53:39.627127 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:53:39.631957 systemd-logind[1488]: New session 7 of user core. Jul 10 23:53:39.647355 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 10 23:53:39.700348 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 23:53:39.700627 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:53:40.055859 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 23:53:40.075638 (dockerd)[1756]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 23:53:40.351219 dockerd[1756]: time="2025-07-10T23:53:40.351077271Z" level=info msg="Starting up" Jul 10 23:53:40.351886 dockerd[1756]: time="2025-07-10T23:53:40.351864831Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 23:53:40.393964 dockerd[1756]: time="2025-07-10T23:53:40.393843791Z" level=info msg="Loading containers: start." Jul 10 23:53:40.404183 kernel: Initializing XFRM netlink socket Jul 10 23:53:40.594987 systemd-networkd[1433]: docker0: Link UP Jul 10 23:53:40.598330 dockerd[1756]: time="2025-07-10T23:53:40.598291151Z" level=info msg="Loading containers: done." Jul 10 23:53:40.609893 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1234761585-merged.mount: Deactivated successfully. Jul 10 23:53:40.612258 dockerd[1756]: time="2025-07-10T23:53:40.612214671Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 23:53:40.612321 dockerd[1756]: time="2025-07-10T23:53:40.612292631Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 23:53:40.612425 dockerd[1756]: time="2025-07-10T23:53:40.612399551Z" level=info msg="Initializing buildkit" Jul 10 23:53:40.637155 dockerd[1756]: time="2025-07-10T23:53:40.637115591Z" level=info msg="Completed buildkit initialization" Jul 10 23:53:40.641636 dockerd[1756]: time="2025-07-10T23:53:40.641597551Z" level=info msg="Daemon has completed initialization" Jul 10 23:53:40.641734 dockerd[1756]: time="2025-07-10T23:53:40.641666031Z" level=info msg="API listen on /run/docker.sock" Jul 10 23:53:40.641800 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 23:53:41.271193 containerd[1514]: time="2025-07-10T23:53:41.271126671Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 23:53:41.821817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633545946.mount: Deactivated successfully. 
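The Docker daemon above finishes initialization on top of the same containerd instance and reports "API listen on /run/docker.sock". A small Go sketch, assuming the Docker Engine Go SDK (github.com/docker/docker/client), that pings that API over the default socket:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client" // Docker Engine Go SDK
    )

    func main() {
        // FromEnv honours DOCKER_HOST; with nothing set it falls back to
        // unix:///var/run/docker.sock, which on this host is the legacy
        // path systemd maps onto /run/docker.sock.
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatalf("create docker client: %v", err)
        }
        defer cli.Close()

        ping, err := cli.Ping(context.Background())
        if err != nil {
            log.Fatalf("ping docker daemon: %v", err)
        }
        fmt.Printf("docker API version %s, builder %v\n", ping.APIVersion, ping.BuilderVersion)
    }

A successful ping here corresponds to the "Daemon has completed initialization" / "API listen on /run/docker.sock" lines in the log.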
Jul 10 23:53:42.779364 containerd[1514]: time="2025-07-10T23:53:42.779308991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:42.779867 containerd[1514]: time="2025-07-10T23:53:42.779831791Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 10 23:53:42.780642 containerd[1514]: time="2025-07-10T23:53:42.780615351Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:42.782907 containerd[1514]: time="2025-07-10T23:53:42.782869751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:42.784028 containerd[1514]: time="2025-07-10T23:53:42.783999431Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.5128288s" Jul 10 23:53:42.784028 containerd[1514]: time="2025-07-10T23:53:42.784033671Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 10 23:53:42.786926 containerd[1514]: time="2025-07-10T23:53:42.786857511Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 23:53:43.901503 containerd[1514]: time="2025-07-10T23:53:43.901451991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:43.902723 containerd[1514]: time="2025-07-10T23:53:43.902690551Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 10 23:53:43.903757 containerd[1514]: time="2025-07-10T23:53:43.903705711Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:43.906184 containerd[1514]: time="2025-07-10T23:53:43.906140551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:43.907302 containerd[1514]: time="2025-07-10T23:53:43.907259471Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.1203676s" Jul 10 23:53:43.907302 containerd[1514]: time="2025-07-10T23:53:43.907293831Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 10 
23:53:43.907754 containerd[1514]: time="2025-07-10T23:53:43.907724991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 23:53:44.201587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 23:53:44.202862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:53:44.318855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:53:44.321844 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:53:44.353385 kubelet[2030]: E0710 23:53:44.353319 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:53:44.356310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:53:44.356439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:53:44.357298 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.1M memory peak. Jul 10 23:53:45.050598 containerd[1514]: time="2025-07-10T23:53:45.050549631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:45.051876 containerd[1514]: time="2025-07-10T23:53:45.051844431Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 10 23:53:45.053192 containerd[1514]: time="2025-07-10T23:53:45.052895951Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:45.055013 containerd[1514]: time="2025-07-10T23:53:45.054982951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:45.056037 containerd[1514]: time="2025-07-10T23:53:45.056011111Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.14814696s" Jul 10 23:53:45.056230 containerd[1514]: time="2025-07-10T23:53:45.056110031Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 10 23:53:45.056607 containerd[1514]: time="2025-07-10T23:53:45.056581271Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 23:53:45.993667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948924883.mount: Deactivated successfully. 
Jul 10 23:53:46.204611 containerd[1514]: time="2025-07-10T23:53:46.204559311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:46.205473 containerd[1514]: time="2025-07-10T23:53:46.205435471Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 10 23:53:46.206023 containerd[1514]: time="2025-07-10T23:53:46.205999031Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:46.208141 containerd[1514]: time="2025-07-10T23:53:46.208104911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:46.208731 containerd[1514]: time="2025-07-10T23:53:46.208612111Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.15199944s" Jul 10 23:53:46.208731 containerd[1514]: time="2025-07-10T23:53:46.208641631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 10 23:53:46.209250 containerd[1514]: time="2025-07-10T23:53:46.209221831Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 23:53:46.786044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474165530.mount: Deactivated successfully. 
Jul 10 23:53:47.570394 containerd[1514]: time="2025-07-10T23:53:47.570283471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:47.571423 containerd[1514]: time="2025-07-10T23:53:47.571369551Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 10 23:53:47.572310 containerd[1514]: time="2025-07-10T23:53:47.572249911Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:47.575414 containerd[1514]: time="2025-07-10T23:53:47.575373591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:47.576535 containerd[1514]: time="2025-07-10T23:53:47.576489231Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.36723532s" Jul 10 23:53:47.576535 containerd[1514]: time="2025-07-10T23:53:47.576534151Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 10 23:53:47.577073 containerd[1514]: time="2025-07-10T23:53:47.577052111Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 23:53:48.012729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3597860644.mount: Deactivated successfully. 
Jul 10 23:53:48.020802 containerd[1514]: time="2025-07-10T23:53:48.020519151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:53:48.021403 containerd[1514]: time="2025-07-10T23:53:48.021352231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 10 23:53:48.022158 containerd[1514]: time="2025-07-10T23:53:48.022118351Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:53:48.024008 containerd[1514]: time="2025-07-10T23:53:48.023967431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:53:48.024837 containerd[1514]: time="2025-07-10T23:53:48.024752711Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 447.67236ms" Jul 10 23:53:48.024837 containerd[1514]: time="2025-07-10T23:53:48.024780951Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 23:53:48.025474 containerd[1514]: time="2025-07-10T23:53:48.025275511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 10 23:53:48.660699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289858681.mount: Deactivated successfully. 
Jul 10 23:53:50.331125 containerd[1514]: time="2025-07-10T23:53:50.330868871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:50.332085 containerd[1514]: time="2025-07-10T23:53:50.331783671Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 10 23:53:50.332889 containerd[1514]: time="2025-07-10T23:53:50.332846951Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:50.335525 containerd[1514]: time="2025-07-10T23:53:50.335488831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:53:50.336631 containerd[1514]: time="2025-07-10T23:53:50.336602271Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.31129596s" Jul 10 23:53:50.336675 containerd[1514]: time="2025-07-10T23:53:50.336630231Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 10 23:53:54.451613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 23:53:54.453422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:53:54.535819 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 23:53:54.535888 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 23:53:54.536136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:53:54.538990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:53:54.560674 systemd[1]: Reload requested from client PID 2195 ('systemctl') (unit session-7.scope)... Jul 10 23:53:54.560691 systemd[1]: Reloading... Jul 10 23:53:54.637203 zram_generator::config[2240]: No configuration found. Jul 10 23:53:54.724002 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:53:54.809568 systemd[1]: Reloading finished in 248 ms. Jul 10 23:53:54.873737 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 23:53:54.873817 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 23:53:54.874048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:53:54.874094 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95M memory peak. Jul 10 23:53:54.875705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:53:54.985753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
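Between 23:53:41 and 23:53:50 the CRI image service pulls the v1.31.10 control-plane images plus coredns, pause and etcd: each "PullImage" is followed by ImageCreate events and a "returns image reference" line with the resolved digest. Roughly the same operation done directly through the containerd client (a sketch under the same pre-2.0 import-path assumption as above) looks like this:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"            // assumption: pre-2.0 client import path
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer client.Close()

        // The CRI plugin keeps Kubernetes images under the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack one of the control-plane images seen in the log.
        img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.31.10", containerd.WithPullUnpack)
        if err != nil {
            log.Fatalf("pull: %v", err)
        }
        fmt.Println("pulled", img.Name())
    }

WithPullUnpack also unpacks the layers into the overlayfs snapshotter, matching the snapshotter path reported when the CRI plugin started.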
Jul 10 23:53:54.995516 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:53:55.051766 kubelet[2282]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:53:55.051766 kubelet[2282]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 23:53:55.051766 kubelet[2282]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:53:55.052089 kubelet[2282]: I0710 23:53:55.051824 2282 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:53:55.942199 kubelet[2282]: I0710 23:53:55.942128 2282 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 23:53:55.942199 kubelet[2282]: I0710 23:53:55.942165 2282 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:53:55.942455 kubelet[2282]: I0710 23:53:55.942426 2282 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 23:53:55.970958 kubelet[2282]: E0710 23:53:55.970921 2282 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:53:55.971864 kubelet[2282]: I0710 23:53:55.971842 2282 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:53:55.982363 kubelet[2282]: I0710 23:53:55.982320 2282 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 23:53:55.985794 kubelet[2282]: I0710 23:53:55.985756 2282 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 23:53:55.986521 kubelet[2282]: I0710 23:53:55.986492 2282 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 23:53:55.986678 kubelet[2282]: I0710 23:53:55.986639 2282 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:53:55.986850 kubelet[2282]: I0710 23:53:55.986672 2282 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:53:55.986850 kubelet[2282]: I0710 23:53:55.986854 2282 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 23:53:55.986959 kubelet[2282]: I0710 23:53:55.986863 2282 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 23:53:55.987130 kubelet[2282]: I0710 23:53:55.987096 2282 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:53:55.989644 kubelet[2282]: I0710 23:53:55.989599 2282 kubelet.go:408] "Attempting to sync node with API server" Jul 10 23:53:55.989644 kubelet[2282]: I0710 23:53:55.989631 2282 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:53:55.989722 kubelet[2282]: I0710 23:53:55.989656 2282 kubelet.go:314] "Adding apiserver pod source" Jul 10 23:53:55.990621 kubelet[2282]: I0710 23:53:55.989778 2282 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:53:55.990621 kubelet[2282]: W0710 23:53:55.990458 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jul 10 23:53:55.990621 kubelet[2282]: E0710 23:53:55.990512 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:53:55.991035 kubelet[2282]: W0710 23:53:55.990970 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jul 10 23:53:55.991035 kubelet[2282]: E0710 23:53:55.991014 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:53:55.993092 kubelet[2282]: I0710 23:53:55.993074 2282 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 23:53:55.993742 kubelet[2282]: I0710 23:53:55.993727 2282 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 23:53:55.995867 kubelet[2282]: W0710 23:53:55.995845 2282 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 23:53:55.996897 kubelet[2282]: I0710 23:53:55.996796 2282 server.go:1274] "Started kubelet" Jul 10 23:53:55.999376 kubelet[2282]: I0710 23:53:55.997471 2282 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:53:55.999376 kubelet[2282]: I0710 23:53:55.997727 2282 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:53:55.999376 kubelet[2282]: I0710 23:53:55.997267 2282 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:53:55.999376 kubelet[2282]: I0710 23:53:55.999011 2282 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:53:55.999376 kubelet[2282]: I0710 23:53:55.999320 2282 server.go:449] "Adding debug handlers to kubelet server" Jul 10 23:53:56.002943 kubelet[2282]: I0710 23:53:56.002841 2282 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:53:56.002943 kubelet[2282]: I0710 23:53:56.002869 2282 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 23:53:56.003636 kubelet[2282]: W0710 23:53:56.003581 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jul 10 23:53:56.003687 kubelet[2282]: E0710 23:53:56.003636 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:53:56.004177 kubelet[2282]: E0710 23:53:56.003979 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms" Jul 10 23:53:56.004177 kubelet[2282]: I0710 23:53:56.002841 2282 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 23:53:56.004177 kubelet[2282]: I0710 23:53:56.004121 2282 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:53:56.009535 kubelet[2282]: E0710 23:53:56.005603 2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851090269ff8697 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 23:53:55.996771991 +0000 UTC m=+0.998176161,LastTimestamp:2025-07-10 23:53:55.996771991 +0000 UTC m=+0.998176161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 23:53:56.010256 kubelet[2282]: I0710 23:53:56.010227 2282 factory.go:221] Registration of the systemd container factory successfully Jul 10 23:53:56.010348 kubelet[2282]: I0710 23:53:56.010311 2282 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:53:56.011076 kubelet[2282]: E0710 23:53:56.011034 2282 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:53:56.011939 kubelet[2282]: E0710 23:53:56.011907 2282 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:53:56.012650 kubelet[2282]: I0710 23:53:56.012630 2282 factory.go:221] Registration of the containerd container factory successfully Jul 10 23:53:56.019210 kubelet[2282]: I0710 23:53:56.019071 2282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 23:53:56.020122 kubelet[2282]: I0710 23:53:56.020104 2282 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 23:53:56.020211 kubelet[2282]: I0710 23:53:56.020201 2282 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 23:53:56.020285 kubelet[2282]: I0710 23:53:56.020274 2282 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 23:53:56.020379 kubelet[2282]: E0710 23:53:56.020363 2282 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:53:56.024801 kubelet[2282]: W0710 23:53:56.024748 2282 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jul 10 23:53:56.024872 kubelet[2282]: E0710 23:53:56.024807 2282 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:53:56.024893 kubelet[2282]: I0710 23:53:56.024871 2282 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 23:53:56.024893 kubelet[2282]: I0710 23:53:56.024880 2282 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 23:53:56.024950 kubelet[2282]: I0710 23:53:56.024895 2282 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:53:56.027265 kubelet[2282]: I0710 23:53:56.027227 2282 policy_none.go:49] "None policy: Start" Jul 10 23:53:56.027730 kubelet[2282]: I0710 23:53:56.027717 2282 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 23:53:56.027766 kubelet[2282]: I0710 23:53:56.027737 2282 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:53:56.032703 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 23:53:56.046290 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 23:53:56.049693 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 23:53:56.060358 kubelet[2282]: I0710 23:53:56.059917 2282 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 23:53:56.060358 kubelet[2282]: I0710 23:53:56.060111 2282 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:53:56.060358 kubelet[2282]: I0710 23:53:56.060122 2282 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:53:56.060693 kubelet[2282]: I0710 23:53:56.060675 2282 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:53:56.062165 kubelet[2282]: E0710 23:53:56.062142 2282 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 23:53:56.129164 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 10 23:53:56.146310 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jul 10 23:53:56.158511 systemd[1]: Created slice kubepods-burstable-pod656ea27d0463be922d21d2e0350d0bcf.slice - libcontainer container kubepods-burstable-pod656ea27d0463be922d21d2e0350d0bcf.slice. Jul 10 23:53:56.161387 kubelet[2282]: I0710 23:53:56.161325 2282 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 23:53:56.161884 kubelet[2282]: E0710 23:53:56.161859 2282 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jul 10 23:53:56.204386 kubelet[2282]: I0710 23:53:56.204233 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:53:56.204386 kubelet[2282]: I0710 23:53:56.204271 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/656ea27d0463be922d21d2e0350d0bcf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"656ea27d0463be922d21d2e0350d0bcf\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:53:56.204386 kubelet[2282]: I0710 23:53:56.204290 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:53:56.204386 kubelet[2282]: I0710 23:53:56.204304 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:53:56.204386 kubelet[2282]: I0710 23:53:56.204327 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:53:56.204694 kubelet[2282]: I0710 23:53:56.204344 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:53:56.204694 kubelet[2282]: E0710 23:53:56.204338 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="400ms" Jul 10 23:53:56.204694 kubelet[2282]: I0710 23:53:56.204359 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 23:53:56.204694 kubelet[2282]: I0710 23:53:56.204374 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/656ea27d0463be922d21d2e0350d0bcf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"656ea27d0463be922d21d2e0350d0bcf\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:53:56.204694 kubelet[2282]: I0710 23:53:56.204389 2282 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/656ea27d0463be922d21d2e0350d0bcf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"656ea27d0463be922d21d2e0350d0bcf\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:53:56.364234 kubelet[2282]: I0710 23:53:56.363964 2282 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 23:53:56.364458 kubelet[2282]: E0710 23:53:56.364404 2282 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jul 10 23:53:56.444651 containerd[1514]: time="2025-07-10T23:53:56.444601071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 10 23:53:56.457762 containerd[1514]: time="2025-07-10T23:53:56.457659151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 10 23:53:56.460705 containerd[1514]: time="2025-07-10T23:53:56.460599551Z" level=info msg="connecting to shim bbfb966d8dcd8730e9a33a334ab0e036014f2d1c77ca4d60a58344bfb4cabee0" address="unix:///run/containerd/s/a5f3fc66a4b8fde49610abe14e67962164b2c69dd7a027961e584e7b0fad1949" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:53:56.461501 containerd[1514]: time="2025-07-10T23:53:56.461452111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:656ea27d0463be922d21d2e0350d0bcf,Namespace:kube-system,Attempt:0,}" Jul 10 23:53:56.485363 systemd[1]: Started cri-containerd-bbfb966d8dcd8730e9a33a334ab0e036014f2d1c77ca4d60a58344bfb4cabee0.scope - libcontainer container bbfb966d8dcd8730e9a33a334ab0e036014f2d1c77ca4d60a58344bfb4cabee0. Jul 10 23:53:56.485904 containerd[1514]: time="2025-07-10T23:53:56.485741831Z" level=info msg="connecting to shim 637f040ca7e3d5706ddd9636778cac0c3bbc7d842c3a7c62aeaf991bba8e0079" address="unix:///run/containerd/s/ee172d1122882749a5fdada663e3ead07246900eec3dea22fd149b9ef7fc5e9f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:53:56.499547 containerd[1514]: time="2025-07-10T23:53:56.499398911Z" level=info msg="connecting to shim ffda958a8fc7ec24a72cbfc1ac5cfb8f859b6ecae0293a4a0bcc730310ef7d53" address="unix:///run/containerd/s/3843c84eccdd5ab41fec8566faadcba76835ebbe65a7a80b3cb002f66b5eeb0c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:53:56.516474 systemd[1]: Started cri-containerd-637f040ca7e3d5706ddd9636778cac0c3bbc7d842c3a7c62aeaf991bba8e0079.scope - libcontainer container 637f040ca7e3d5706ddd9636778cac0c3bbc7d842c3a7c62aeaf991bba8e0079. 
Jul 10 23:53:56.521440 systemd[1]: Started cri-containerd-ffda958a8fc7ec24a72cbfc1ac5cfb8f859b6ecae0293a4a0bcc730310ef7d53.scope - libcontainer container ffda958a8fc7ec24a72cbfc1ac5cfb8f859b6ecae0293a4a0bcc730310ef7d53. Jul 10 23:53:56.529511 containerd[1514]: time="2025-07-10T23:53:56.529468911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbfb966d8dcd8730e9a33a334ab0e036014f2d1c77ca4d60a58344bfb4cabee0\"" Jul 10 23:53:56.533549 containerd[1514]: time="2025-07-10T23:53:56.533440511Z" level=info msg="CreateContainer within sandbox \"bbfb966d8dcd8730e9a33a334ab0e036014f2d1c77ca4d60a58344bfb4cabee0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 23:53:56.542822 containerd[1514]: time="2025-07-10T23:53:56.542776271Z" level=info msg="Container 23a4ec80d5a2234f4d1645132106b55a1dcc14a1d2c3e802f74edb43eab98682: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:53:56.598652 containerd[1514]: time="2025-07-10T23:53:56.596632151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"637f040ca7e3d5706ddd9636778cac0c3bbc7d842c3a7c62aeaf991bba8e0079\"" Jul 10 23:53:56.598652 containerd[1514]: time="2025-07-10T23:53:56.597533671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:656ea27d0463be922d21d2e0350d0bcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffda958a8fc7ec24a72cbfc1ac5cfb8f859b6ecae0293a4a0bcc730310ef7d53\"" Jul 10 23:53:56.600138 containerd[1514]: time="2025-07-10T23:53:56.600099551Z" level=info msg="CreateContainer within sandbox \"637f040ca7e3d5706ddd9636778cac0c3bbc7d842c3a7c62aeaf991bba8e0079\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 23:53:56.600215 containerd[1514]: time="2025-07-10T23:53:56.600163151Z" level=info msg="CreateContainer within sandbox \"ffda958a8fc7ec24a72cbfc1ac5cfb8f859b6ecae0293a4a0bcc730310ef7d53\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 23:53:56.600648 containerd[1514]: time="2025-07-10T23:53:56.600617551Z" level=info msg="CreateContainer within sandbox \"bbfb966d8dcd8730e9a33a334ab0e036014f2d1c77ca4d60a58344bfb4cabee0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23a4ec80d5a2234f4d1645132106b55a1dcc14a1d2c3e802f74edb43eab98682\"" Jul 10 23:53:56.601415 containerd[1514]: time="2025-07-10T23:53:56.601369151Z" level=info msg="StartContainer for \"23a4ec80d5a2234f4d1645132106b55a1dcc14a1d2c3e802f74edb43eab98682\"" Jul 10 23:53:56.602617 containerd[1514]: time="2025-07-10T23:53:56.602588551Z" level=info msg="connecting to shim 23a4ec80d5a2234f4d1645132106b55a1dcc14a1d2c3e802f74edb43eab98682" address="unix:///run/containerd/s/a5f3fc66a4b8fde49610abe14e67962164b2c69dd7a027961e584e7b0fad1949" protocol=ttrpc version=3 Jul 10 23:53:56.605619 kubelet[2282]: E0710 23:53:56.605575 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms" Jul 10 23:53:56.620394 systemd[1]: Started cri-containerd-23a4ec80d5a2234f4d1645132106b55a1dcc14a1d2c3e802f74edb43eab98682.scope - libcontainer container 
23a4ec80d5a2234f4d1645132106b55a1dcc14a1d2c3e802f74edb43eab98682. Jul 10 23:53:56.622002 containerd[1514]: time="2025-07-10T23:53:56.621961071Z" level=info msg="Container 98ac686da0ae333fd284553d209de9ac8f126492381eb9ed99c145f916b5f5ff: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:53:56.625702 containerd[1514]: time="2025-07-10T23:53:56.625665231Z" level=info msg="Container 1b5d3ac66eadafa07627febc5988165a6cf121a31200fd2e5ff371a853b139a1: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:53:56.636968 containerd[1514]: time="2025-07-10T23:53:56.636916591Z" level=info msg="CreateContainer within sandbox \"637f040ca7e3d5706ddd9636778cac0c3bbc7d842c3a7c62aeaf991bba8e0079\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"98ac686da0ae333fd284553d209de9ac8f126492381eb9ed99c145f916b5f5ff\"" Jul 10 23:53:56.637414 containerd[1514]: time="2025-07-10T23:53:56.637394351Z" level=info msg="StartContainer for \"98ac686da0ae333fd284553d209de9ac8f126492381eb9ed99c145f916b5f5ff\"" Jul 10 23:53:56.639667 containerd[1514]: time="2025-07-10T23:53:56.639632391Z" level=info msg="CreateContainer within sandbox \"ffda958a8fc7ec24a72cbfc1ac5cfb8f859b6ecae0293a4a0bcc730310ef7d53\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1b5d3ac66eadafa07627febc5988165a6cf121a31200fd2e5ff371a853b139a1\"" Jul 10 23:53:56.639866 containerd[1514]: time="2025-07-10T23:53:56.639727471Z" level=info msg="connecting to shim 98ac686da0ae333fd284553d209de9ac8f126492381eb9ed99c145f916b5f5ff" address="unix:///run/containerd/s/ee172d1122882749a5fdada663e3ead07246900eec3dea22fd149b9ef7fc5e9f" protocol=ttrpc version=3 Jul 10 23:53:56.640165 containerd[1514]: time="2025-07-10T23:53:56.640143151Z" level=info msg="StartContainer for \"1b5d3ac66eadafa07627febc5988165a6cf121a31200fd2e5ff371a853b139a1\"" Jul 10 23:53:56.641324 containerd[1514]: time="2025-07-10T23:53:56.641242071Z" level=info msg="connecting to shim 1b5d3ac66eadafa07627febc5988165a6cf121a31200fd2e5ff371a853b139a1" address="unix:///run/containerd/s/3843c84eccdd5ab41fec8566faadcba76835ebbe65a7a80b3cb002f66b5eeb0c" protocol=ttrpc version=3 Jul 10 23:53:56.659553 containerd[1514]: time="2025-07-10T23:53:56.659503591Z" level=info msg="StartContainer for \"23a4ec80d5a2234f4d1645132106b55a1dcc14a1d2c3e802f74edb43eab98682\" returns successfully" Jul 10 23:53:56.664381 systemd[1]: Started cri-containerd-1b5d3ac66eadafa07627febc5988165a6cf121a31200fd2e5ff371a853b139a1.scope - libcontainer container 1b5d3ac66eadafa07627febc5988165a6cf121a31200fd2e5ff371a853b139a1. Jul 10 23:53:56.665491 systemd[1]: Started cri-containerd-98ac686da0ae333fd284553d209de9ac8f126492381eb9ed99c145f916b5f5ff.scope - libcontainer container 98ac686da0ae333fd284553d209de9ac8f126492381eb9ed99c145f916b5f5ff. 
Jul 10 23:53:56.706027 containerd[1514]: time="2025-07-10T23:53:56.705931711Z" level=info msg="StartContainer for \"98ac686da0ae333fd284553d209de9ac8f126492381eb9ed99c145f916b5f5ff\" returns successfully" Jul 10 23:53:56.715261 containerd[1514]: time="2025-07-10T23:53:56.715035351Z" level=info msg="StartContainer for \"1b5d3ac66eadafa07627febc5988165a6cf121a31200fd2e5ff371a853b139a1\" returns successfully" Jul 10 23:53:56.770152 kubelet[2282]: I0710 23:53:56.770121 2282 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 23:53:56.771226 kubelet[2282]: E0710 23:53:56.770826 2282 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jul 10 23:53:57.572606 kubelet[2282]: I0710 23:53:57.572311 2282 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 23:53:59.123777 kubelet[2282]: E0710 23:53:59.123744 2282 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 23:53:59.296285 kubelet[2282]: I0710 23:53:59.296240 2282 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 23:53:59.296285 kubelet[2282]: E0710 23:53:59.296286 2282 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 23:53:59.632819 kubelet[2282]: E0710 23:53:59.632740 2282 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 23:53:59.993010 kubelet[2282]: I0710 23:53:59.992917 2282 apiserver.go:52] "Watching apiserver" Jul 10 23:54:00.003994 kubelet[2282]: I0710 23:54:00.003940 2282 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 23:54:01.467513 systemd[1]: Reload requested from client PID 2556 ('systemctl') (unit session-7.scope)... Jul 10 23:54:01.467530 systemd[1]: Reloading... Jul 10 23:54:01.541252 zram_generator::config[2599]: No configuration found. Jul 10 23:54:01.614037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:54:01.714131 systemd[1]: Reloading finished in 246 ms. Jul 10 23:54:01.738958 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:54:01.758265 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 23:54:01.758579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:54:01.758639 systemd[1]: kubelet.service: Consumed 1.370s CPU time, 128.2M memory peak. Jul 10 23:54:01.760866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:54:01.906319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:54:01.917127 (kubelet)[2641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:54:01.959528 kubelet[2641]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:54:01.959528 kubelet[2641]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 23:54:01.959528 kubelet[2641]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:54:01.959854 kubelet[2641]: I0710 23:54:01.959573 2641 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:54:01.966087 kubelet[2641]: I0710 23:54:01.966038 2641 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 23:54:01.966087 kubelet[2641]: I0710 23:54:01.966071 2641 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:54:01.966544 kubelet[2641]: I0710 23:54:01.966518 2641 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 23:54:01.968627 kubelet[2641]: I0710 23:54:01.968595 2641 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 23:54:01.970590 kubelet[2641]: I0710 23:54:01.970544 2641 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:54:01.975068 kubelet[2641]: I0710 23:54:01.975046 2641 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 23:54:01.977717 kubelet[2641]: I0710 23:54:01.977568 2641 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 23:54:01.977816 kubelet[2641]: I0710 23:54:01.977725 2641 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 23:54:01.977847 kubelet[2641]: I0710 23:54:01.977822 2641 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:54:01.978083 kubelet[2641]: I0710 23:54:01.977849 2641 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:54:01.978157 kubelet[2641]: I0710 23:54:01.978091 2641 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 23:54:01.978197 kubelet[2641]: I0710 23:54:01.978156 2641 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 23:54:01.978339 kubelet[2641]: I0710 23:54:01.978323 2641 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:54:01.978469 kubelet[2641]: I0710 23:54:01.978456 2641 kubelet.go:408] "Attempting to sync node with API server" Jul 10 23:54:01.978497 kubelet[2641]: I0710 23:54:01.978477 2641 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:54:01.978497 kubelet[2641]: I0710 23:54:01.978497 2641 kubelet.go:314] "Adding apiserver pod source" Jul 10 23:54:01.978535 kubelet[2641]: I0710 23:54:01.978512 2641 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:54:01.979342 kubelet[2641]: I0710 23:54:01.979317 2641 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 23:54:01.982181 kubelet[2641]: I0710 23:54:01.979786 2641 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 23:54:01.982181 kubelet[2641]: I0710 23:54:01.980289 2641 server.go:1274] "Started kubelet" Jul 10 23:54:01.982439 kubelet[2641]: I0710 23:54:01.982363 2641 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 
23:54:01.982803 kubelet[2641]: I0710 23:54:01.982781 2641 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:54:01.982968 kubelet[2641]: I0710 23:54:01.982943 2641 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:54:01.984691 kubelet[2641]: I0710 23:54:01.984663 2641 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:54:01.990340 kubelet[2641]: I0710 23:54:01.989409 2641 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 23:54:01.990340 kubelet[2641]: I0710 23:54:01.989501 2641 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:54:01.990340 kubelet[2641]: I0710 23:54:01.989567 2641 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 23:54:01.990340 kubelet[2641]: I0710 23:54:01.989731 2641 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:54:01.990340 kubelet[2641]: E0710 23:54:01.989995 2641 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:54:01.995204 kubelet[2641]: I0710 23:54:01.995147 2641 server.go:449] "Adding debug handlers to kubelet server" Jul 10 23:54:02.005200 kubelet[2641]: I0710 23:54:02.004262 2641 factory.go:221] Registration of the containerd container factory successfully Jul 10 23:54:02.005200 kubelet[2641]: I0710 23:54:02.004289 2641 factory.go:221] Registration of the systemd container factory successfully Jul 10 23:54:02.005200 kubelet[2641]: I0710 23:54:02.004377 2641 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:54:02.010466 kubelet[2641]: I0710 23:54:02.010392 2641 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 23:54:02.013368 kubelet[2641]: E0710 23:54:02.011433 2641 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:54:02.013368 kubelet[2641]: I0710 23:54:02.011756 2641 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 23:54:02.013368 kubelet[2641]: I0710 23:54:02.011775 2641 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 23:54:02.013368 kubelet[2641]: I0710 23:54:02.011807 2641 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 23:54:02.013368 kubelet[2641]: E0710 23:54:02.011863 2641 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:54:02.045796 kubelet[2641]: I0710 23:54:02.045767 2641 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 23:54:02.045796 kubelet[2641]: I0710 23:54:02.045789 2641 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 23:54:02.045946 kubelet[2641]: I0710 23:54:02.045812 2641 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:54:02.046546 kubelet[2641]: I0710 23:54:02.046292 2641 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 23:54:02.046546 kubelet[2641]: I0710 23:54:02.046312 2641 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 23:54:02.046546 kubelet[2641]: I0710 23:54:02.046334 2641 policy_none.go:49] "None policy: Start" Jul 10 23:54:02.047076 kubelet[2641]: I0710 23:54:02.047058 2641 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 23:54:02.047076 kubelet[2641]: I0710 23:54:02.047086 2641 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:54:02.047296 kubelet[2641]: I0710 23:54:02.047280 2641 state_mem.go:75] "Updated machine memory state" Jul 10 23:54:02.053134 kubelet[2641]: I0710 23:54:02.052980 2641 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 23:54:02.053279 kubelet[2641]: I0710 23:54:02.053207 2641 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:54:02.053279 kubelet[2641]: I0710 23:54:02.053221 2641 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:54:02.053499 kubelet[2641]: I0710 23:54:02.053473 2641 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:54:02.155279 kubelet[2641]: I0710 23:54:02.155236 2641 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 23:54:02.163697 kubelet[2641]: I0710 23:54:02.163657 2641 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 10 23:54:02.163811 kubelet[2641]: I0710 23:54:02.163758 2641 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 23:54:02.190838 kubelet[2641]: I0710 23:54:02.190780 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/656ea27d0463be922d21d2e0350d0bcf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"656ea27d0463be922d21d2e0350d0bcf\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:54:02.190838 kubelet[2641]: I0710 23:54:02.190824 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:54:02.190838 kubelet[2641]: I0710 23:54:02.190850 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 23:54:02.191061 kubelet[2641]: I0710 23:54:02.190867 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/656ea27d0463be922d21d2e0350d0bcf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"656ea27d0463be922d21d2e0350d0bcf\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:54:02.191061 kubelet[2641]: I0710 23:54:02.190883 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/656ea27d0463be922d21d2e0350d0bcf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"656ea27d0463be922d21d2e0350d0bcf\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:54:02.191061 kubelet[2641]: I0710 23:54:02.190911 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:54:02.191061 kubelet[2641]: I0710 23:54:02.190925 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:54:02.191061 kubelet[2641]: I0710 23:54:02.190940 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:54:02.191200 kubelet[2641]: I0710 23:54:02.190960 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:54:02.979425 kubelet[2641]: I0710 23:54:02.979361 2641 apiserver.go:52] "Watching apiserver" Jul 10 23:54:02.989670 kubelet[2641]: I0710 23:54:02.989642 2641 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 23:54:03.060415 kubelet[2641]: I0710 23:54:03.060331 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.060314271 podStartE2EDuration="1.060314271s" podCreationTimestamp="2025-07-10 23:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:54:03.047478271 +0000 UTC m=+1.126995601" watchObservedRunningTime="2025-07-10 23:54:03.060314271 +0000 UTC m=+1.139831601" Jul 10 23:54:03.071615 kubelet[2641]: I0710 23:54:03.071333 2641 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.070825191 podStartE2EDuration="1.070825191s" podCreationTimestamp="2025-07-10 23:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:54:03.070645551 +0000 UTC m=+1.150162881" watchObservedRunningTime="2025-07-10 23:54:03.070825191 +0000 UTC m=+1.150342521" Jul 10 23:54:03.071615 kubelet[2641]: I0710 23:54:03.071465 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.071458791 podStartE2EDuration="1.071458791s" podCreationTimestamp="2025-07-10 23:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:54:03.060482111 +0000 UTC m=+1.139999441" watchObservedRunningTime="2025-07-10 23:54:03.071458791 +0000 UTC m=+1.150976161" Jul 10 23:54:05.855103 kubelet[2641]: I0710 23:54:05.855058 2641 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 23:54:05.855510 containerd[1514]: time="2025-07-10T23:54:05.855450150Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 23:54:05.855701 kubelet[2641]: I0710 23:54:05.855644 2641 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 23:54:06.556278 systemd[1]: Created slice kubepods-besteffort-podf68c55ef_c567_4389_adb9_9e9729300239.slice - libcontainer container kubepods-besteffort-podf68c55ef_c567_4389_adb9_9e9729300239.slice. Jul 10 23:54:06.617063 kubelet[2641]: I0710 23:54:06.616866 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f68c55ef-c567-4389-adb9-9e9729300239-kube-proxy\") pod \"kube-proxy-nkr4f\" (UID: \"f68c55ef-c567-4389-adb9-9e9729300239\") " pod="kube-system/kube-proxy-nkr4f" Jul 10 23:54:06.617650 kubelet[2641]: I0710 23:54:06.617369 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f68c55ef-c567-4389-adb9-9e9729300239-xtables-lock\") pod \"kube-proxy-nkr4f\" (UID: \"f68c55ef-c567-4389-adb9-9e9729300239\") " pod="kube-system/kube-proxy-nkr4f" Jul 10 23:54:06.617650 kubelet[2641]: I0710 23:54:06.617415 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6x95\" (UniqueName: \"kubernetes.io/projected/f68c55ef-c567-4389-adb9-9e9729300239-kube-api-access-z6x95\") pod \"kube-proxy-nkr4f\" (UID: \"f68c55ef-c567-4389-adb9-9e9729300239\") " pod="kube-system/kube-proxy-nkr4f" Jul 10 23:54:06.617650 kubelet[2641]: I0710 23:54:06.617445 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f68c55ef-c567-4389-adb9-9e9729300239-lib-modules\") pod \"kube-proxy-nkr4f\" (UID: \"f68c55ef-c567-4389-adb9-9e9729300239\") " pod="kube-system/kube-proxy-nkr4f" Jul 10 23:54:06.869074 containerd[1514]: time="2025-07-10T23:54:06.868156404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nkr4f,Uid:f68c55ef-c567-4389-adb9-9e9729300239,Namespace:kube-system,Attempt:0,}" Jul 10 
23:54:06.895889 containerd[1514]: time="2025-07-10T23:54:06.895486329Z" level=info msg="connecting to shim 13350404e574974cfb04ce9a6f2dcf1088583fc44596937f5add484af5348dfc" address="unix:///run/containerd/s/c97caa7bb1f87343003b6638f8f7feb090404e836040d6efa80929cfbe5eea9c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:06.897546 systemd[1]: Created slice kubepods-besteffort-pod36695adc_32d7_4910_96f3_b99477e5572b.slice - libcontainer container kubepods-besteffort-pod36695adc_32d7_4910_96f3_b99477e5572b.slice. Jul 10 23:54:06.919181 kubelet[2641]: I0710 23:54:06.919109 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/36695adc-32d7-4910-96f3-b99477e5572b-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-gdvsj\" (UID: \"36695adc-32d7-4910-96f3-b99477e5572b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-gdvsj" Jul 10 23:54:06.919181 kubelet[2641]: I0710 23:54:06.919152 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t9zt\" (UniqueName: \"kubernetes.io/projected/36695adc-32d7-4910-96f3-b99477e5572b-kube-api-access-5t9zt\") pod \"tigera-operator-5bf8dfcb4-gdvsj\" (UID: \"36695adc-32d7-4910-96f3-b99477e5572b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-gdvsj" Jul 10 23:54:06.932377 systemd[1]: Started cri-containerd-13350404e574974cfb04ce9a6f2dcf1088583fc44596937f5add484af5348dfc.scope - libcontainer container 13350404e574974cfb04ce9a6f2dcf1088583fc44596937f5add484af5348dfc. Jul 10 23:54:06.953737 containerd[1514]: time="2025-07-10T23:54:06.953661669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nkr4f,Uid:f68c55ef-c567-4389-adb9-9e9729300239,Namespace:kube-system,Attempt:0,} returns sandbox id \"13350404e574974cfb04ce9a6f2dcf1088583fc44596937f5add484af5348dfc\"" Jul 10 23:54:06.956844 containerd[1514]: time="2025-07-10T23:54:06.956812799Z" level=info msg="CreateContainer within sandbox \"13350404e574974cfb04ce9a6f2dcf1088583fc44596937f5add484af5348dfc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 23:54:06.969203 containerd[1514]: time="2025-07-10T23:54:06.969099997Z" level=info msg="Container 4cd6d25835237d08a990d30186887ef8a423134d7d654f0a9d9c6e333557bec0: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:06.975328 containerd[1514]: time="2025-07-10T23:54:06.975286936Z" level=info msg="CreateContainer within sandbox \"13350404e574974cfb04ce9a6f2dcf1088583fc44596937f5add484af5348dfc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4cd6d25835237d08a990d30186887ef8a423134d7d654f0a9d9c6e333557bec0\"" Jul 10 23:54:06.976018 containerd[1514]: time="2025-07-10T23:54:06.975994379Z" level=info msg="StartContainer for \"4cd6d25835237d08a990d30186887ef8a423134d7d654f0a9d9c6e333557bec0\"" Jul 10 23:54:06.977512 containerd[1514]: time="2025-07-10T23:54:06.977487463Z" level=info msg="connecting to shim 4cd6d25835237d08a990d30186887ef8a423134d7d654f0a9d9c6e333557bec0" address="unix:///run/containerd/s/c97caa7bb1f87343003b6638f8f7feb090404e836040d6efa80929cfbe5eea9c" protocol=ttrpc version=3 Jul 10 23:54:07.004345 systemd[1]: Started cri-containerd-4cd6d25835237d08a990d30186887ef8a423134d7d654f0a9d9c6e333557bec0.scope - libcontainer container 4cd6d25835237d08a990d30186887ef8a423134d7d654f0a9d9c6e333557bec0. 
Jul 10 23:54:07.047930 containerd[1514]: time="2025-07-10T23:54:07.047876673Z" level=info msg="StartContainer for \"4cd6d25835237d08a990d30186887ef8a423134d7d654f0a9d9c6e333557bec0\" returns successfully" Jul 10 23:54:07.204084 containerd[1514]: time="2025-07-10T23:54:07.204042887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-gdvsj,Uid:36695adc-32d7-4910-96f3-b99477e5572b,Namespace:tigera-operator,Attempt:0,}" Jul 10 23:54:07.234941 containerd[1514]: time="2025-07-10T23:54:07.234837657Z" level=info msg="connecting to shim ccbabe95b290ff05001aac2968051810398d00ace6b8f3024b2cb2ad74f62cce" address="unix:///run/containerd/s/a8f585ac1f5637955e4aef416f86e995981f93f23ea9716363d9ec8e8d5c2f9c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:07.257357 systemd[1]: Started cri-containerd-ccbabe95b290ff05001aac2968051810398d00ace6b8f3024b2cb2ad74f62cce.scope - libcontainer container ccbabe95b290ff05001aac2968051810398d00ace6b8f3024b2cb2ad74f62cce. Jul 10 23:54:07.286793 containerd[1514]: time="2025-07-10T23:54:07.286753448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-gdvsj,Uid:36695adc-32d7-4910-96f3-b99477e5572b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ccbabe95b290ff05001aac2968051810398d00ace6b8f3024b2cb2ad74f62cce\"" Jul 10 23:54:07.289194 containerd[1514]: time="2025-07-10T23:54:07.288556813Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 23:54:08.052153 kubelet[2641]: I0710 23:54:08.051213 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nkr4f" podStartSLOduration=2.051196783 podStartE2EDuration="2.051196783s" podCreationTimestamp="2025-07-10 23:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:54:08.050731382 +0000 UTC m=+6.130248712" watchObservedRunningTime="2025-07-10 23:54:08.051196783 +0000 UTC m=+6.130714113" Jul 10 23:54:08.584126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040737486.mount: Deactivated successfully. 
Jul 10 23:54:08.888136 containerd[1514]: time="2025-07-10T23:54:08.887937586Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:08.889319 containerd[1514]: time="2025-07-10T23:54:08.888391387Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 10 23:54:08.890017 containerd[1514]: time="2025-07-10T23:54:08.889989871Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:08.892015 containerd[1514]: time="2025-07-10T23:54:08.891992797Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:08.892962 containerd[1514]: time="2025-07-10T23:54:08.892610319Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.603997386s" Jul 10 23:54:08.892962 containerd[1514]: time="2025-07-10T23:54:08.892637639Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 10 23:54:08.896530 containerd[1514]: time="2025-07-10T23:54:08.896503809Z" level=info msg="CreateContainer within sandbox \"ccbabe95b290ff05001aac2968051810398d00ace6b8f3024b2cb2ad74f62cce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 23:54:08.904154 containerd[1514]: time="2025-07-10T23:54:08.903596829Z" level=info msg="Container 6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:08.907691 containerd[1514]: time="2025-07-10T23:54:08.907651040Z" level=info msg="CreateContainer within sandbox \"ccbabe95b290ff05001aac2968051810398d00ace6b8f3024b2cb2ad74f62cce\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238\"" Jul 10 23:54:08.909194 containerd[1514]: time="2025-07-10T23:54:08.908141081Z" level=info msg="StartContainer for \"6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238\"" Jul 10 23:54:08.909194 containerd[1514]: time="2025-07-10T23:54:08.908965243Z" level=info msg="connecting to shim 6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238" address="unix:///run/containerd/s/a8f585ac1f5637955e4aef416f86e995981f93f23ea9716363d9ec8e8d5c2f9c" protocol=ttrpc version=3 Jul 10 23:54:08.929355 systemd[1]: Started cri-containerd-6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238.scope - libcontainer container 6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238. 
Jul 10 23:54:08.957718 containerd[1514]: time="2025-07-10T23:54:08.957682496Z" level=info msg="StartContainer for \"6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238\" returns successfully" Jul 10 23:54:09.060196 kubelet[2641]: I0710 23:54:09.060123 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-gdvsj" podStartSLOduration=1.45265409 podStartE2EDuration="3.060109285s" podCreationTimestamp="2025-07-10 23:54:06 +0000 UTC" firstStartedPulling="2025-07-10 23:54:07.288022811 +0000 UTC m=+5.367540141" lastFinishedPulling="2025-07-10 23:54:08.895478006 +0000 UTC m=+6.974995336" observedRunningTime="2025-07-10 23:54:09.059914885 +0000 UTC m=+7.139432215" watchObservedRunningTime="2025-07-10 23:54:09.060109285 +0000 UTC m=+7.139626615" Jul 10 23:54:11.042410 systemd[1]: cri-containerd-6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238.scope: Deactivated successfully. Jul 10 23:54:11.132943 containerd[1514]: time="2025-07-10T23:54:11.132888346Z" level=info msg="received exit event container_id:\"6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238\" id:\"6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238\" pid:2963 exit_status:1 exited_at:{seconds:1752191651 nanos:94790780}" Jul 10 23:54:11.134769 containerd[1514]: time="2025-07-10T23:54:11.133004506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238\" id:\"6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238\" pid:2963 exit_status:1 exited_at:{seconds:1752191651 nanos:94790780}" Jul 10 23:54:11.196225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238-rootfs.mount: Deactivated successfully. 
Jul 10 23:54:12.059200 kubelet[2641]: I0710 23:54:12.058902 2641 scope.go:117] "RemoveContainer" containerID="6e3644230f0f82052341581ab180590b104788285e983a695be63f3d51ac5238" Jul 10 23:54:12.063123 containerd[1514]: time="2025-07-10T23:54:12.061656225Z" level=info msg="CreateContainer within sandbox \"ccbabe95b290ff05001aac2968051810398d00ace6b8f3024b2cb2ad74f62cce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 10 23:54:12.072986 containerd[1514]: time="2025-07-10T23:54:12.072948369Z" level=info msg="Container 88ed52a10935649cc71bc3cb67af82dd4c28f9c2fe65f2cd297cd32608aa2483: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:12.083702 containerd[1514]: time="2025-07-10T23:54:12.083604032Z" level=info msg="CreateContainer within sandbox \"ccbabe95b290ff05001aac2968051810398d00ace6b8f3024b2cb2ad74f62cce\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"88ed52a10935649cc71bc3cb67af82dd4c28f9c2fe65f2cd297cd32608aa2483\"" Jul 10 23:54:12.090520 containerd[1514]: time="2025-07-10T23:54:12.090468646Z" level=info msg="StartContainer for \"88ed52a10935649cc71bc3cb67af82dd4c28f9c2fe65f2cd297cd32608aa2483\"" Jul 10 23:54:12.093733 containerd[1514]: time="2025-07-10T23:54:12.093636693Z" level=info msg="connecting to shim 88ed52a10935649cc71bc3cb67af82dd4c28f9c2fe65f2cd297cd32608aa2483" address="unix:///run/containerd/s/a8f585ac1f5637955e4aef416f86e995981f93f23ea9716363d9ec8e8d5c2f9c" protocol=ttrpc version=3 Jul 10 23:54:12.156365 systemd[1]: Started cri-containerd-88ed52a10935649cc71bc3cb67af82dd4c28f9c2fe65f2cd297cd32608aa2483.scope - libcontainer container 88ed52a10935649cc71bc3cb67af82dd4c28f9c2fe65f2cd297cd32608aa2483. Jul 10 23:54:12.196251 containerd[1514]: time="2025-07-10T23:54:12.195046786Z" level=info msg="StartContainer for \"88ed52a10935649cc71bc3cb67af82dd4c28f9c2fe65f2cd297cd32608aa2483\" returns successfully" Jul 10 23:54:14.316357 sudo[1735]: pam_unix(sudo:session): session closed for user root Jul 10 23:54:14.327590 sshd[1734]: Connection closed by 10.0.0.1 port 60372 Jul 10 23:54:14.328062 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jul 10 23:54:14.331211 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit. Jul 10 23:54:14.331315 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:60372.service: Deactivated successfully. Jul 10 23:54:14.333495 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 23:54:14.333720 systemd[1]: session-7.scope: Consumed 6.538s CPU time, 225.5M memory peak. Jul 10 23:54:14.336557 systemd-logind[1488]: Removed session 7. Jul 10 23:54:16.965498 update_engine[1498]: I20250710 23:54:16.965428 1498 update_attempter.cc:509] Updating boot flags... Jul 10 23:54:20.304397 systemd[1]: Created slice kubepods-besteffort-pod65c73fb9_2176_4b13_a64a_b7eb7012169b.slice - libcontainer container kubepods-besteffort-pod65c73fb9_2176_4b13_a64a_b7eb7012169b.slice. 
Jul 10 23:54:20.331820 kubelet[2641]: I0710 23:54:20.331773 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/65c73fb9-2176-4b13-a64a-b7eb7012169b-typha-certs\") pod \"calico-typha-69555ffc59-thlfs\" (UID: \"65c73fb9-2176-4b13-a64a-b7eb7012169b\") " pod="calico-system/calico-typha-69555ffc59-thlfs" Jul 10 23:54:20.331820 kubelet[2641]: I0710 23:54:20.331821 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srgqr\" (UniqueName: \"kubernetes.io/projected/65c73fb9-2176-4b13-a64a-b7eb7012169b-kube-api-access-srgqr\") pod \"calico-typha-69555ffc59-thlfs\" (UID: \"65c73fb9-2176-4b13-a64a-b7eb7012169b\") " pod="calico-system/calico-typha-69555ffc59-thlfs" Jul 10 23:54:20.332286 kubelet[2641]: I0710 23:54:20.331852 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65c73fb9-2176-4b13-a64a-b7eb7012169b-tigera-ca-bundle\") pod \"calico-typha-69555ffc59-thlfs\" (UID: \"65c73fb9-2176-4b13-a64a-b7eb7012169b\") " pod="calico-system/calico-typha-69555ffc59-thlfs" Jul 10 23:54:20.344311 systemd[1]: Created slice kubepods-besteffort-pod83992846_5b91_4af6_8bd0_e29660c0ec8c.slice - libcontainer container kubepods-besteffort-pod83992846_5b91_4af6_8bd0_e29660c0ec8c.slice. Jul 10 23:54:20.432605 kubelet[2641]: I0710 23:54:20.432557 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/83992846-5b91-4af6-8bd0-e29660c0ec8c-cni-log-dir\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.432809 kubelet[2641]: I0710 23:54:20.432787 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk5cb\" (UniqueName: \"kubernetes.io/projected/83992846-5b91-4af6-8bd0-e29660c0ec8c-kube-api-access-sk5cb\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.432878 kubelet[2641]: I0710 23:54:20.432865 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/83992846-5b91-4af6-8bd0-e29660c0ec8c-cni-net-dir\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.432939 kubelet[2641]: I0710 23:54:20.432928 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/83992846-5b91-4af6-8bd0-e29660c0ec8c-var-run-calico\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.433013 kubelet[2641]: I0710 23:54:20.432999 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/83992846-5b91-4af6-8bd0-e29660c0ec8c-policysync\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.433072 kubelet[2641]: I0710 23:54:20.433062 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83992846-5b91-4af6-8bd0-e29660c0ec8c-var-lib-calico\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.433149 kubelet[2641]: I0710 23:54:20.433138 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/83992846-5b91-4af6-8bd0-e29660c0ec8c-cni-bin-dir\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.433260 kubelet[2641]: I0710 23:54:20.433245 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83992846-5b91-4af6-8bd0-e29660c0ec8c-lib-modules\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.433344 kubelet[2641]: I0710 23:54:20.433331 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/83992846-5b91-4af6-8bd0-e29660c0ec8c-flexvol-driver-host\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.433426 kubelet[2641]: I0710 23:54:20.433413 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83992846-5b91-4af6-8bd0-e29660c0ec8c-tigera-ca-bundle\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.433490 kubelet[2641]: I0710 23:54:20.433479 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83992846-5b91-4af6-8bd0-e29660c0ec8c-xtables-lock\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.433571 kubelet[2641]: I0710 23:54:20.433559 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/83992846-5b91-4af6-8bd0-e29660c0ec8c-node-certs\") pod \"calico-node-bhtzd\" (UID: \"83992846-5b91-4af6-8bd0-e29660c0ec8c\") " pod="calico-system/calico-node-bhtzd" Jul 10 23:54:20.537742 kubelet[2641]: E0710 23:54:20.537701 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.537858 kubelet[2641]: W0710 23:54:20.537829 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.537886 kubelet[2641]: E0710 23:54:20.537855 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.539326 kubelet[2641]: E0710 23:54:20.539299 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.539326 kubelet[2641]: W0710 23:54:20.539321 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.540373 kubelet[2641]: E0710 23:54:20.539339 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.543006 kubelet[2641]: E0710 23:54:20.542251 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.543006 kubelet[2641]: W0710 23:54:20.542270 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.543006 kubelet[2641]: E0710 23:54:20.542288 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.543356 kubelet[2641]: E0710 23:54:20.543342 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.543450 kubelet[2641]: W0710 23:54:20.543435 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.543508 kubelet[2641]: E0710 23:54:20.543496 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.549180 kubelet[2641]: E0710 23:54:20.548846 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zv4h7" podUID="4bda0615-d9a8-4ef2-ac3a-8fad441f4e10" Jul 10 23:54:20.549807 kubelet[2641]: E0710 23:54:20.549705 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.549946 kubelet[2641]: W0710 23:54:20.549899 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.550100 kubelet[2641]: E0710 23:54:20.550046 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.561551 kubelet[2641]: E0710 23:54:20.561475 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.561671 kubelet[2641]: W0710 23:54:20.561654 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.561733 kubelet[2641]: E0710 23:54:20.561721 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.603271 kubelet[2641]: E0710 23:54:20.603237 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.603271 kubelet[2641]: W0710 23:54:20.603271 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.603431 kubelet[2641]: E0710 23:54:20.603291 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.603489 kubelet[2641]: E0710 23:54:20.603479 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.603519 kubelet[2641]: W0710 23:54:20.603493 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.603519 kubelet[2641]: E0710 23:54:20.603503 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.603633 kubelet[2641]: E0710 23:54:20.603622 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.603633 kubelet[2641]: W0710 23:54:20.603632 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.603690 kubelet[2641]: E0710 23:54:20.603648 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.603774 kubelet[2641]: E0710 23:54:20.603764 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.603774 kubelet[2641]: W0710 23:54:20.603774 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.603817 kubelet[2641]: E0710 23:54:20.603781 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.603982 kubelet[2641]: E0710 23:54:20.603971 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.603982 kubelet[2641]: W0710 23:54:20.603982 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.604044 kubelet[2641]: E0710 23:54:20.603990 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.604127 kubelet[2641]: E0710 23:54:20.604111 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.604156 kubelet[2641]: W0710 23:54:20.604127 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.604156 kubelet[2641]: E0710 23:54:20.604134 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.604297 kubelet[2641]: E0710 23:54:20.604285 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.604297 kubelet[2641]: W0710 23:54:20.604295 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.604353 kubelet[2641]: E0710 23:54:20.604303 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.604447 kubelet[2641]: E0710 23:54:20.604437 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.604447 kubelet[2641]: W0710 23:54:20.604446 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.604494 kubelet[2641]: E0710 23:54:20.604454 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.604602 kubelet[2641]: E0710 23:54:20.604592 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.604632 kubelet[2641]: W0710 23:54:20.604602 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.604632 kubelet[2641]: E0710 23:54:20.604609 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.604768 kubelet[2641]: E0710 23:54:20.604758 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.604796 kubelet[2641]: W0710 23:54:20.604769 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.604796 kubelet[2641]: E0710 23:54:20.604778 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.604924 kubelet[2641]: E0710 23:54:20.604915 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.604924 kubelet[2641]: W0710 23:54:20.604923 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.604979 kubelet[2641]: E0710 23:54:20.604931 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.605073 kubelet[2641]: E0710 23:54:20.605064 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.605102 kubelet[2641]: W0710 23:54:20.605073 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.605102 kubelet[2641]: E0710 23:54:20.605081 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.605236 kubelet[2641]: E0710 23:54:20.605224 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.605236 kubelet[2641]: W0710 23:54:20.605234 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.605305 kubelet[2641]: E0710 23:54:20.605241 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.605369 kubelet[2641]: E0710 23:54:20.605359 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.605369 kubelet[2641]: W0710 23:54:20.605369 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.605420 kubelet[2641]: E0710 23:54:20.605376 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.605516 kubelet[2641]: E0710 23:54:20.605506 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.605516 kubelet[2641]: W0710 23:54:20.605515 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.605567 kubelet[2641]: E0710 23:54:20.605527 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.605671 kubelet[2641]: E0710 23:54:20.605654 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.605671 kubelet[2641]: W0710 23:54:20.605664 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.605725 kubelet[2641]: E0710 23:54:20.605671 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.605907 kubelet[2641]: E0710 23:54:20.605889 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.605907 kubelet[2641]: W0710 23:54:20.605902 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.605966 kubelet[2641]: E0710 23:54:20.605912 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.606095 kubelet[2641]: E0710 23:54:20.606082 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.606095 kubelet[2641]: W0710 23:54:20.606093 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.606149 kubelet[2641]: E0710 23:54:20.606103 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.606293 kubelet[2641]: E0710 23:54:20.606277 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.606293 kubelet[2641]: W0710 23:54:20.606288 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.606362 kubelet[2641]: E0710 23:54:20.606297 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.606460 kubelet[2641]: E0710 23:54:20.606450 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.606460 kubelet[2641]: W0710 23:54:20.606459 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.606508 kubelet[2641]: E0710 23:54:20.606467 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.616653 containerd[1514]: time="2025-07-10T23:54:20.616610069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69555ffc59-thlfs,Uid:65c73fb9-2176-4b13-a64a-b7eb7012169b,Namespace:calico-system,Attempt:0,}" Jul 10 23:54:20.636379 kubelet[2641]: E0710 23:54:20.636353 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.636379 kubelet[2641]: W0710 23:54:20.636372 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.636515 kubelet[2641]: E0710 23:54:20.636401 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.636515 kubelet[2641]: I0710 23:54:20.636439 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4bda0615-d9a8-4ef2-ac3a-8fad441f4e10-varrun\") pod \"csi-node-driver-zv4h7\" (UID: \"4bda0615-d9a8-4ef2-ac3a-8fad441f4e10\") " pod="calico-system/csi-node-driver-zv4h7" Jul 10 23:54:20.636709 kubelet[2641]: E0710 23:54:20.636695 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.636709 kubelet[2641]: W0710 23:54:20.636708 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.636770 kubelet[2641]: E0710 23:54:20.636722 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.636770 kubelet[2641]: I0710 23:54:20.636739 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4bda0615-d9a8-4ef2-ac3a-8fad441f4e10-socket-dir\") pod \"csi-node-driver-zv4h7\" (UID: \"4bda0615-d9a8-4ef2-ac3a-8fad441f4e10\") " pod="calico-system/csi-node-driver-zv4h7" Jul 10 23:54:20.636893 kubelet[2641]: E0710 23:54:20.636881 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.636893 kubelet[2641]: W0710 23:54:20.636893 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.636947 kubelet[2641]: E0710 23:54:20.636906 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.636947 kubelet[2641]: I0710 23:54:20.636920 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4bda0615-d9a8-4ef2-ac3a-8fad441f4e10-registration-dir\") pod \"csi-node-driver-zv4h7\" (UID: \"4bda0615-d9a8-4ef2-ac3a-8fad441f4e10\") " pod="calico-system/csi-node-driver-zv4h7" Jul 10 23:54:20.637085 kubelet[2641]: E0710 23:54:20.637074 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.637111 kubelet[2641]: W0710 23:54:20.637085 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.637111 kubelet[2641]: E0710 23:54:20.637098 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.637151 kubelet[2641]: I0710 23:54:20.637112 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4tq6\" (UniqueName: \"kubernetes.io/projected/4bda0615-d9a8-4ef2-ac3a-8fad441f4e10-kube-api-access-p4tq6\") pod \"csi-node-driver-zv4h7\" (UID: \"4bda0615-d9a8-4ef2-ac3a-8fad441f4e10\") " pod="calico-system/csi-node-driver-zv4h7" Jul 10 23:54:20.637292 kubelet[2641]: E0710 23:54:20.637281 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.637292 kubelet[2641]: W0710 23:54:20.637292 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.637348 kubelet[2641]: E0710 23:54:20.637305 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.637348 kubelet[2641]: I0710 23:54:20.637320 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bda0615-d9a8-4ef2-ac3a-8fad441f4e10-kubelet-dir\") pod \"csi-node-driver-zv4h7\" (UID: \"4bda0615-d9a8-4ef2-ac3a-8fad441f4e10\") " pod="calico-system/csi-node-driver-zv4h7" Jul 10 23:54:20.637534 kubelet[2641]: E0710 23:54:20.637522 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.637565 kubelet[2641]: W0710 23:54:20.637534 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.637565 kubelet[2641]: E0710 23:54:20.637558 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.637703 kubelet[2641]: E0710 23:54:20.637693 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.637703 kubelet[2641]: W0710 23:54:20.637703 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.637747 kubelet[2641]: E0710 23:54:20.637718 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.637877 kubelet[2641]: E0710 23:54:20.637867 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.637877 kubelet[2641]: W0710 23:54:20.637876 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.637933 kubelet[2641]: E0710 23:54:20.637892 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.638032 kubelet[2641]: E0710 23:54:20.638022 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.638032 kubelet[2641]: W0710 23:54:20.638032 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.638182 kubelet[2641]: E0710 23:54:20.638144 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.638278 kubelet[2641]: E0710 23:54:20.638255 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.638278 kubelet[2641]: W0710 23:54:20.638265 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.638331 kubelet[2641]: E0710 23:54:20.638304 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.638410 kubelet[2641]: E0710 23:54:20.638399 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.638410 kubelet[2641]: W0710 23:54:20.638409 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.638477 kubelet[2641]: E0710 23:54:20.638441 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.638928 kubelet[2641]: E0710 23:54:20.638906 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.638928 kubelet[2641]: W0710 23:54:20.638923 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.639192 kubelet[2641]: E0710 23:54:20.639034 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.639263 kubelet[2641]: E0710 23:54:20.639225 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.639263 kubelet[2641]: W0710 23:54:20.639234 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.639263 kubelet[2641]: E0710 23:54:20.639243 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.639588 kubelet[2641]: E0710 23:54:20.639451 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.639588 kubelet[2641]: W0710 23:54:20.639460 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.639588 kubelet[2641]: E0710 23:54:20.639513 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.639819 kubelet[2641]: E0710 23:54:20.639691 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.639819 kubelet[2641]: W0710 23:54:20.639711 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.639819 kubelet[2641]: E0710 23:54:20.639721 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.649883 containerd[1514]: time="2025-07-10T23:54:20.649812311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhtzd,Uid:83992846-5b91-4af6-8bd0-e29660c0ec8c,Namespace:calico-system,Attempt:0,}" Jul 10 23:54:20.655806 containerd[1514]: time="2025-07-10T23:54:20.655768799Z" level=info msg="connecting to shim 8a5795d0f16c690402d68e7062be5e5712143d2b20960b8db90672551a6e77f4" address="unix:///run/containerd/s/3a7c0d9147e9865ffccef5328ce7ceb017fad96d418c54658369ee67be573857" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:20.668421 containerd[1514]: time="2025-07-10T23:54:20.668100614Z" level=info msg="connecting to shim 7b4ab67b652c32a9a09187a24803eb363e9a3d8a1e46676de6be0b220d98ee2e" address="unix:///run/containerd/s/dcd27f9898112a87117b559a8c1bfe1d6797c55d2c0f001d8080e9e50626794c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:20.678354 systemd[1]: Started cri-containerd-8a5795d0f16c690402d68e7062be5e5712143d2b20960b8db90672551a6e77f4.scope - libcontainer container 8a5795d0f16c690402d68e7062be5e5712143d2b20960b8db90672551a6e77f4. Jul 10 23:54:20.695336 systemd[1]: Started cri-containerd-7b4ab67b652c32a9a09187a24803eb363e9a3d8a1e46676de6be0b220d98ee2e.scope - libcontainer container 7b4ab67b652c32a9a09187a24803eb363e9a3d8a1e46676de6be0b220d98ee2e. Jul 10 23:54:20.728396 containerd[1514]: time="2025-07-10T23:54:20.728270810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69555ffc59-thlfs,Uid:65c73fb9-2176-4b13-a64a-b7eb7012169b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a5795d0f16c690402d68e7062be5e5712143d2b20960b8db90672551a6e77f4\"" Jul 10 23:54:20.733125 containerd[1514]: time="2025-07-10T23:54:20.733045976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 23:54:20.737828 kubelet[2641]: E0710 23:54:20.737808 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.738082 kubelet[2641]: W0710 23:54:20.737992 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.738082 kubelet[2641]: E0710 23:54:20.738018 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.738565 kubelet[2641]: E0710 23:54:20.738457 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.738565 kubelet[2641]: W0710 23:54:20.738487 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.738565 kubelet[2641]: E0710 23:54:20.738506 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.739110 kubelet[2641]: E0710 23:54:20.739004 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.739407 kubelet[2641]: W0710 23:54:20.739195 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.739407 kubelet[2641]: E0710 23:54:20.739224 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.739745 kubelet[2641]: E0710 23:54:20.739715 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.739745 kubelet[2641]: W0710 23:54:20.739730 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.739887 kubelet[2641]: E0710 23:54:20.739830 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.740130 kubelet[2641]: E0710 23:54:20.740115 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.740227 kubelet[2641]: W0710 23:54:20.740203 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.740308 kubelet[2641]: E0710 23:54:20.740296 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.740746 kubelet[2641]: E0710 23:54:20.740687 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.740746 kubelet[2641]: W0710 23:54:20.740700 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.741232 kubelet[2641]: E0710 23:54:20.741034 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.741396 kubelet[2641]: E0710 23:54:20.741328 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.741396 kubelet[2641]: W0710 23:54:20.741342 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.741457 kubelet[2641]: E0710 23:54:20.741395 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.741656 kubelet[2641]: E0710 23:54:20.741643 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.741775 kubelet[2641]: W0710 23:54:20.741761 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.741948 kubelet[2641]: E0710 23:54:20.741933 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.742654 kubelet[2641]: E0710 23:54:20.742639 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.742741 kubelet[2641]: W0710 23:54:20.742729 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.742910 kubelet[2641]: E0710 23:54:20.742896 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.743352 kubelet[2641]: E0710 23:54:20.743334 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.743481 kubelet[2641]: W0710 23:54:20.743447 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.743647 kubelet[2641]: E0710 23:54:20.743624 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.744012 kubelet[2641]: E0710 23:54:20.743998 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.744379 kubelet[2641]: W0710 23:54:20.744310 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.744551 kubelet[2641]: E0710 23:54:20.744536 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.745067 kubelet[2641]: E0710 23:54:20.745010 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.745067 kubelet[2641]: W0710 23:54:20.745025 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.745156 kubelet[2641]: E0710 23:54:20.745066 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.745580 kubelet[2641]: E0710 23:54:20.745566 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.745744 kubelet[2641]: W0710 23:54:20.745648 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.745744 kubelet[2641]: E0710 23:54:20.745680 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.745885 kubelet[2641]: E0710 23:54:20.745874 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.745943 kubelet[2641]: W0710 23:54:20.745932 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.746033 kubelet[2641]: E0710 23:54:20.746010 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.746341 kubelet[2641]: E0710 23:54:20.746296 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.746721 kubelet[2641]: W0710 23:54:20.746594 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.746721 kubelet[2641]: E0710 23:54:20.746689 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.747184 kubelet[2641]: E0710 23:54:20.747141 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.747851 kubelet[2641]: W0710 23:54:20.747275 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.747851 kubelet[2641]: E0710 23:54:20.747323 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.748441 kubelet[2641]: E0710 23:54:20.748412 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.748545 kubelet[2641]: W0710 23:54:20.748531 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.748676 kubelet[2641]: E0710 23:54:20.748631 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.748911 kubelet[2641]: E0710 23:54:20.748884 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.748968 kubelet[2641]: W0710 23:54:20.748898 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.749075 kubelet[2641]: E0710 23:54:20.749015 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.750084 kubelet[2641]: E0710 23:54:20.750067 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.750304 kubelet[2641]: W0710 23:54:20.750164 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.750304 kubelet[2641]: E0710 23:54:20.750230 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.750855 kubelet[2641]: E0710 23:54:20.750810 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.750855 kubelet[2641]: W0710 23:54:20.750827 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.751019 kubelet[2641]: E0710 23:54:20.750966 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.756156 kubelet[2641]: E0710 23:54:20.756131 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.756156 kubelet[2641]: W0710 23:54:20.756150 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.756523 kubelet[2641]: E0710 23:54:20.756337 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.756523 kubelet[2641]: E0710 23:54:20.756345 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.756523 kubelet[2641]: W0710 23:54:20.756398 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.756523 kubelet[2641]: E0710 23:54:20.756438 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.757567 kubelet[2641]: E0710 23:54:20.757551 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.757639 kubelet[2641]: W0710 23:54:20.757626 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.757721 kubelet[2641]: E0710 23:54:20.757700 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.757985 kubelet[2641]: E0710 23:54:20.757971 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.758055 kubelet[2641]: W0710 23:54:20.758043 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.758226 kubelet[2641]: E0710 23:54:20.758194 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.758467 kubelet[2641]: E0710 23:54:20.758419 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.758467 kubelet[2641]: W0710 23:54:20.758433 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.758467 kubelet[2641]: E0710 23:54:20.758445 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:20.774149 kubelet[2641]: E0710 23:54:20.774119 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:20.774149 kubelet[2641]: W0710 23:54:20.774142 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:20.774300 kubelet[2641]: E0710 23:54:20.774160 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:20.775252 containerd[1514]: time="2025-07-10T23:54:20.775214869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhtzd,Uid:83992846-5b91-4af6-8bd0-e29660c0ec8c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b4ab67b652c32a9a09187a24803eb363e9a3d8a1e46676de6be0b220d98ee2e\"" Jul 10 23:54:21.593107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348892517.mount: Deactivated successfully. Jul 10 23:54:22.953463 containerd[1514]: time="2025-07-10T23:54:22.953417944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:22.954112 containerd[1514]: time="2025-07-10T23:54:22.954082865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 10 23:54:22.955185 containerd[1514]: time="2025-07-10T23:54:22.955017746Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:22.957117 containerd[1514]: time="2025-07-10T23:54:22.957078348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:22.958115 containerd[1514]: time="2025-07-10T23:54:22.958078469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.224991973s" Jul 10 23:54:22.958218 containerd[1514]: time="2025-07-10T23:54:22.958201310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 10 23:54:22.970217 containerd[1514]: time="2025-07-10T23:54:22.967298280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 23:54:22.987847 containerd[1514]: time="2025-07-10T23:54:22.987807662Z" level=info msg="CreateContainer within sandbox \"8a5795d0f16c690402d68e7062be5e5712143d2b20960b8db90672551a6e77f4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 23:54:22.994951 containerd[1514]: time="2025-07-10T23:54:22.994112389Z" level=info msg="Container 7b1658b667e89b5a58ba4028acb58aaac6d89ad3a22445ba5918bfcb323b5351: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:23.003065 containerd[1514]: time="2025-07-10T23:54:23.003001839Z" level=info msg="CreateContainer within sandbox \"8a5795d0f16c690402d68e7062be5e5712143d2b20960b8db90672551a6e77f4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7b1658b667e89b5a58ba4028acb58aaac6d89ad3a22445ba5918bfcb323b5351\"" Jul 10 23:54:23.003696 containerd[1514]: time="2025-07-10T23:54:23.003540880Z" level=info msg="StartContainer for \"7b1658b667e89b5a58ba4028acb58aaac6d89ad3a22445ba5918bfcb323b5351\"" Jul 10 23:54:23.005357 containerd[1514]: time="2025-07-10T23:54:23.005320361Z" level=info msg="connecting to shim 7b1658b667e89b5a58ba4028acb58aaac6d89ad3a22445ba5918bfcb323b5351" 
address="unix:///run/containerd/s/3a7c0d9147e9865ffccef5328ce7ceb017fad96d418c54658369ee67be573857" protocol=ttrpc version=3 Jul 10 23:54:23.012297 kubelet[2641]: E0710 23:54:23.012162 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zv4h7" podUID="4bda0615-d9a8-4ef2-ac3a-8fad441f4e10" Jul 10 23:54:23.027334 systemd[1]: Started cri-containerd-7b1658b667e89b5a58ba4028acb58aaac6d89ad3a22445ba5918bfcb323b5351.scope - libcontainer container 7b1658b667e89b5a58ba4028acb58aaac6d89ad3a22445ba5918bfcb323b5351. Jul 10 23:54:23.073037 containerd[1514]: time="2025-07-10T23:54:23.072985672Z" level=info msg="StartContainer for \"7b1658b667e89b5a58ba4028acb58aaac6d89ad3a22445ba5918bfcb323b5351\" returns successfully" Jul 10 23:54:23.128190 kubelet[2641]: E0710 23:54:23.127970 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.128190 kubelet[2641]: W0710 23:54:23.128190 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.128353 kubelet[2641]: E0710 23:54:23.128215 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:23.128605 kubelet[2641]: E0710 23:54:23.128581 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.128605 kubelet[2641]: W0710 23:54:23.128595 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.128605 kubelet[2641]: E0710 23:54:23.128605 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:23.129274 kubelet[2641]: E0710 23:54:23.129259 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.129274 kubelet[2641]: W0710 23:54:23.129273 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.129347 kubelet[2641]: E0710 23:54:23.129285 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:23.129675 kubelet[2641]: E0710 23:54:23.129662 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.129715 kubelet[2641]: W0710 23:54:23.129675 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.131151 kubelet[2641]: E0710 23:54:23.129696 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:23.131316 kubelet[2641]: E0710 23:54:23.131280 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.131316 kubelet[2641]: W0710 23:54:23.131292 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.131316 kubelet[2641]: E0710 23:54:23.131306 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:23.131972 kubelet[2641]: E0710 23:54:23.131464 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.131972 kubelet[2641]: W0710 23:54:23.131478 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.131972 kubelet[2641]: E0710 23:54:23.131488 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:23.131972 kubelet[2641]: E0710 23:54:23.131658 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.131972 kubelet[2641]: W0710 23:54:23.131667 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.131972 kubelet[2641]: E0710 23:54:23.131677 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:23.131972 kubelet[2641]: E0710 23:54:23.131841 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.131972 kubelet[2641]: W0710 23:54:23.131850 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.131972 kubelet[2641]: E0710 23:54:23.131859 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:23.133444 kubelet[2641]: E0710 23:54:23.132024 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.133444 kubelet[2641]: W0710 23:54:23.132033 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.133444 kubelet[2641]: E0710 23:54:23.132041 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:23.133444 kubelet[2641]: E0710 23:54:23.132190 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.133444 kubelet[2641]: W0710 23:54:23.132197 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.133444 kubelet[2641]: E0710 23:54:23.132205 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:23.133444 kubelet[2641]: E0710 23:54:23.132382 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.133444 kubelet[2641]: W0710 23:54:23.132392 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.133444 kubelet[2641]: E0710 23:54:23.132401 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:23.133444 kubelet[2641]: E0710 23:54:23.132539 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.133655 kubelet[2641]: W0710 23:54:23.132547 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.133655 kubelet[2641]: E0710 23:54:23.132555 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:54:23.133655 kubelet[2641]: E0710 23:54:23.132698 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:23.133655 kubelet[2641]: W0710 23:54:23.132706 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:23.133655 kubelet[2641]: E0710 23:54:23.132713 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jul 10 23:54:23.141033 kubelet[2641]: I0710 23:54:23.140980 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69555ffc59-thlfs" podStartSLOduration=0.906839239 podStartE2EDuration="3.140963782s" podCreationTimestamp="2025-07-10 23:54:20 +0000 UTC" firstStartedPulling="2025-07-10 23:54:20.732061095 +0000 UTC m=+18.811578425" lastFinishedPulling="2025-07-10 23:54:22.966185638 +0000 UTC m=+21.045702968" observedRunningTime="2025-07-10 23:54:23.139795981 +0000 UTC m=+21.219313311" watchObservedRunningTime="2025-07-10 23:54:23.140963782 +0000 UTC m=+21.220481112"
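The calico-typha startup figures above are internally consistent: podStartSLOduration matches the end-to-end duration minus the time spent pulling images, which can be reproduced from the m=+ monotonic offsets in the entry (a quick check, using the values exactly as logged):

```python
# Reproduce podStartSLOduration for calico-typha-69555ffc59-thlfs from the
# monotonic (m=+...) offsets logged above.
first_started_pulling = 18.811578425   # firstStartedPulling, m=+ offset in seconds
last_finished_pulling = 21.045702968   # lastFinishedPulling, m=+ offset in seconds
e2e_duration          = 3.140963782    # podStartE2EDuration as logged

pull_window = last_finished_pulling - first_started_pulling   # ~2.234124543 s pulling images
slo_duration = e2e_duration - pull_window                     # ~0.906839239 s

print(f"image pull window: {pull_window:.9f}s")
print(f"SLO duration excluding pulls: {slo_duration:.9f}s  (logged: 0.906839239)")
```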
Jul 10 23:54:24.126458 kubelet[2641]: I0710 23:54:24.126427 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:54:24.170974 kubelet[2641]: E0710 23:54:24.170953 2641 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:54:24.170974 kubelet[2641]: W0710 23:54:24.170970 2641 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:54:24.171029 kubelet[2641]: E0710 23:54:24.170982 2641 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:54:24.249460 containerd[1514]: time="2025-07-10T23:54:24.249412514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:24.250768 containerd[1514]: time="2025-07-10T23:54:24.250721396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 10 23:54:24.251663 containerd[1514]: time="2025-07-10T23:54:24.251617837Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:24.254275 containerd[1514]: time="2025-07-10T23:54:24.254237679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:24.255339 containerd[1514]: time="2025-07-10T23:54:24.255227120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.28790112s" Jul 10 23:54:24.255339 containerd[1514]: time="2025-07-10T23:54:24.255257560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 10 23:54:24.257289 containerd[1514]: time="2025-07-10T23:54:24.257251522Z" level=info msg="CreateContainer within sandbox \"7b4ab67b652c32a9a09187a24803eb363e9a3d8a1e46676de6be0b220d98ee2e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 23:54:24.266063 containerd[1514]: time="2025-07-10T23:54:24.265318530Z" level=info msg="Container 6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:24.281144 containerd[1514]: time="2025-07-10T23:54:24.281097025Z" level=info msg="CreateContainer within sandbox \"7b4ab67b652c32a9a09187a24803eb363e9a3d8a1e46676de6be0b220d98ee2e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857\"" Jul 10 23:54:24.282311 containerd[1514]: time="2025-07-10T23:54:24.281999586Z" level=info msg="StartContainer for \"6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857\"" Jul 10 23:54:24.296786 containerd[1514]: time="2025-07-10T23:54:24.296743400Z" level=info msg="connecting to shim 6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857" address="unix:///run/containerd/s/dcd27f9898112a87117b559a8c1bfe1d6797c55d2c0f001d8080e9e50626794c" protocol=ttrpc version=3 Jul 10 23:54:24.326370 systemd[1]: Started cri-containerd-6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857.scope - libcontainer container 6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857. 
Jul 10 23:54:24.373566 containerd[1514]: time="2025-07-10T23:54:24.373527715Z" level=info msg="StartContainer for \"6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857\" returns successfully" Jul 10 23:54:24.423569 systemd[1]: cri-containerd-6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857.scope: Deactivated successfully. Jul 10 23:54:24.427140 containerd[1514]: time="2025-07-10T23:54:24.427086887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857\" id:\"6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857\" pid:3419 exited_at:{seconds:1752191664 nanos:426523326}" Jul 10 23:54:24.433721 containerd[1514]: time="2025-07-10T23:54:24.433662013Z" level=info msg="received exit event container_id:\"6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857\" id:\"6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857\" pid:3419 exited_at:{seconds:1752191664 nanos:426523326}" Jul 10 23:54:24.452624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fbca3e1fee160e86951ca6a0daa40d02f1243eb29fa1ec5bf86c7a05e318857-rootfs.mount: Deactivated successfully. Jul 10 23:54:25.013227 kubelet[2641]: E0710 23:54:25.012601 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zv4h7" podUID="4bda0615-d9a8-4ef2-ac3a-8fad441f4e10" Jul 10 23:54:25.130679 containerd[1514]: time="2025-07-10T23:54:25.130436682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 23:54:27.013061 kubelet[2641]: E0710 23:54:27.012996 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zv4h7" podUID="4bda0615-d9a8-4ef2-ac3a-8fad441f4e10" Jul 10 23:54:28.067510 containerd[1514]: time="2025-07-10T23:54:28.067435019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:28.067989 containerd[1514]: time="2025-07-10T23:54:28.067958220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 10 23:54:28.068787 containerd[1514]: time="2025-07-10T23:54:28.068753740Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:28.070502 containerd[1514]: time="2025-07-10T23:54:28.070455261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:28.071199 containerd[1514]: time="2025-07-10T23:54:28.070969942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.94049346s" Jul 10 23:54:28.071199 containerd[1514]: 
time="2025-07-10T23:54:28.071000222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 10 23:54:28.074192 containerd[1514]: time="2025-07-10T23:54:28.073001343Z" level=info msg="CreateContainer within sandbox \"7b4ab67b652c32a9a09187a24803eb363e9a3d8a1e46676de6be0b220d98ee2e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 23:54:28.089787 containerd[1514]: time="2025-07-10T23:54:28.088689995Z" level=info msg="Container 68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:28.089941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1030412770.mount: Deactivated successfully. Jul 10 23:54:28.097070 containerd[1514]: time="2025-07-10T23:54:28.097026921Z" level=info msg="CreateContainer within sandbox \"7b4ab67b652c32a9a09187a24803eb363e9a3d8a1e46676de6be0b220d98ee2e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6\"" Jul 10 23:54:28.097661 containerd[1514]: time="2025-07-10T23:54:28.097463402Z" level=info msg="StartContainer for \"68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6\"" Jul 10 23:54:28.099305 containerd[1514]: time="2025-07-10T23:54:28.099165363Z" level=info msg="connecting to shim 68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6" address="unix:///run/containerd/s/dcd27f9898112a87117b559a8c1bfe1d6797c55d2c0f001d8080e9e50626794c" protocol=ttrpc version=3 Jul 10 23:54:28.121349 systemd[1]: Started cri-containerd-68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6.scope - libcontainer container 68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6. Jul 10 23:54:28.158879 containerd[1514]: time="2025-07-10T23:54:28.158834808Z" level=info msg="StartContainer for \"68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6\" returns successfully" Jul 10 23:54:28.708335 systemd[1]: cri-containerd-68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6.scope: Deactivated successfully. Jul 10 23:54:28.708622 systemd[1]: cri-containerd-68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6.scope: Consumed 499ms CPU time, 178.1M memory peak, 1.9M read from disk, 165.8M written to disk. Jul 10 23:54:28.715282 containerd[1514]: time="2025-07-10T23:54:28.715228625Z" level=info msg="received exit event container_id:\"68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6\" id:\"68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6\" pid:3480 exited_at:{seconds:1752191668 nanos:714973425}" Jul 10 23:54:28.715807 containerd[1514]: time="2025-07-10T23:54:28.715502586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6\" id:\"68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6\" pid:3480 exited_at:{seconds:1752191668 nanos:714973425}" Jul 10 23:54:28.723482 kubelet[2641]: I0710 23:54:28.723430 2641 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 23:54:28.746464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68f832977511756ea6afced41c8c9c7b1e8f6c3ff5a3a2632314c97e72306bd6-rootfs.mount: Deactivated successfully. 
Jul 10 23:54:28.775064 systemd[1]: Created slice kubepods-besteffort-pod37d160c2_6211_496a_8f08_d88ca9af73d3.slice - libcontainer container kubepods-besteffort-pod37d160c2_6211_496a_8f08_d88ca9af73d3.slice. Jul 10 23:54:28.784058 systemd[1]: Created slice kubepods-burstable-pode8d41fb3_c49c_4d59_9ceb_4117861c87d5.slice - libcontainer container kubepods-burstable-pode8d41fb3_c49c_4d59_9ceb_4117861c87d5.slice. Jul 10 23:54:28.793073 systemd[1]: Created slice kubepods-burstable-podd23c2080_afda_4b5d_acb1_7f2e54e5e5f2.slice - libcontainer container kubepods-burstable-podd23c2080_afda_4b5d_acb1_7f2e54e5e5f2.slice. Jul 10 23:54:28.796796 kubelet[2641]: I0710 23:54:28.796755 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1285be41-48fc-433e-a23a-923ab7fc60f1-whisker-backend-key-pair\") pod \"whisker-875bd8f74-g9jlq\" (UID: \"1285be41-48fc-433e-a23a-923ab7fc60f1\") " pod="calico-system/whisker-875bd8f74-g9jlq" Jul 10 23:54:28.796796 kubelet[2641]: I0710 23:54:28.796798 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1285be41-48fc-433e-a23a-923ab7fc60f1-whisker-ca-bundle\") pod \"whisker-875bd8f74-g9jlq\" (UID: \"1285be41-48fc-433e-a23a-923ab7fc60f1\") " pod="calico-system/whisker-875bd8f74-g9jlq" Jul 10 23:54:28.797089 kubelet[2641]: I0710 23:54:28.796816 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnxjl\" (UniqueName: \"kubernetes.io/projected/1285be41-48fc-433e-a23a-923ab7fc60f1-kube-api-access-fnxjl\") pod \"whisker-875bd8f74-g9jlq\" (UID: \"1285be41-48fc-433e-a23a-923ab7fc60f1\") " pod="calico-system/whisker-875bd8f74-g9jlq" Jul 10 23:54:28.797089 kubelet[2641]: I0710 23:54:28.796837 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqztv\" (UniqueName: \"kubernetes.io/projected/37d160c2-6211-496a-8f08-d88ca9af73d3-kube-api-access-cqztv\") pod \"calico-apiserver-54f656494b-4knwr\" (UID: \"37d160c2-6211-496a-8f08-d88ca9af73d3\") " pod="calico-apiserver/calico-apiserver-54f656494b-4knwr" Jul 10 23:54:28.797089 kubelet[2641]: I0710 23:54:28.796866 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtdhj\" (UniqueName: \"kubernetes.io/projected/d23c2080-afda-4b5d-acb1-7f2e54e5e5f2-kube-api-access-gtdhj\") pod \"coredns-7c65d6cfc9-j77jp\" (UID: \"d23c2080-afda-4b5d-acb1-7f2e54e5e5f2\") " pod="kube-system/coredns-7c65d6cfc9-j77jp" Jul 10 23:54:28.797089 kubelet[2641]: I0710 23:54:28.796884 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6w4g\" (UniqueName: \"kubernetes.io/projected/f1c62578-5eea-4dd0-89a8-770515134a8b-kube-api-access-k6w4g\") pod \"calico-apiserver-54f656494b-n4vd4\" (UID: \"f1c62578-5eea-4dd0-89a8-770515134a8b\") " pod="calico-apiserver/calico-apiserver-54f656494b-n4vd4" Jul 10 23:54:28.797089 kubelet[2641]: I0710 23:54:28.796931 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9wgf\" (UniqueName: \"kubernetes.io/projected/3f82ae53-4310-4884-ac43-08c9352cdd68-kube-api-access-d9wgf\") pod \"goldmane-58fd7646b9-d9w88\" (UID: \"3f82ae53-4310-4884-ac43-08c9352cdd68\") " pod="calico-system/goldmane-58fd7646b9-d9w88" 
Jul 10 23:54:28.797228 kubelet[2641]: I0710 23:54:28.796951 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw2xn\" (UniqueName: \"kubernetes.io/projected/e8d41fb3-c49c-4d59-9ceb-4117861c87d5-kube-api-access-rw2xn\") pod \"coredns-7c65d6cfc9-kcz6s\" (UID: \"e8d41fb3-c49c-4d59-9ceb-4117861c87d5\") " pod="kube-system/coredns-7c65d6cfc9-kcz6s" Jul 10 23:54:28.797228 kubelet[2641]: I0710 23:54:28.796969 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8d41fb3-c49c-4d59-9ceb-4117861c87d5-config-volume\") pod \"coredns-7c65d6cfc9-kcz6s\" (UID: \"e8d41fb3-c49c-4d59-9ceb-4117861c87d5\") " pod="kube-system/coredns-7c65d6cfc9-kcz6s" Jul 10 23:54:28.797228 kubelet[2641]: I0710 23:54:28.796992 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3f82ae53-4310-4884-ac43-08c9352cdd68-goldmane-key-pair\") pod \"goldmane-58fd7646b9-d9w88\" (UID: \"3f82ae53-4310-4884-ac43-08c9352cdd68\") " pod="calico-system/goldmane-58fd7646b9-d9w88" Jul 10 23:54:28.797228 kubelet[2641]: I0710 23:54:28.797008 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d23c2080-afda-4b5d-acb1-7f2e54e5e5f2-config-volume\") pod \"coredns-7c65d6cfc9-j77jp\" (UID: \"d23c2080-afda-4b5d-acb1-7f2e54e5e5f2\") " pod="kube-system/coredns-7c65d6cfc9-j77jp" Jul 10 23:54:28.797228 kubelet[2641]: I0710 23:54:28.797026 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e05304b8-2313-4330-b227-1400b69e732c-tigera-ca-bundle\") pod \"calico-kube-controllers-84b8d4f545-qlpwt\" (UID: \"e05304b8-2313-4330-b227-1400b69e732c\") " pod="calico-system/calico-kube-controllers-84b8d4f545-qlpwt" Jul 10 23:54:28.797330 kubelet[2641]: I0710 23:54:28.797046 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f82ae53-4310-4884-ac43-08c9352cdd68-config\") pod \"goldmane-58fd7646b9-d9w88\" (UID: \"3f82ae53-4310-4884-ac43-08c9352cdd68\") " pod="calico-system/goldmane-58fd7646b9-d9w88" Jul 10 23:54:28.797330 kubelet[2641]: I0710 23:54:28.797062 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f82ae53-4310-4884-ac43-08c9352cdd68-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-d9w88\" (UID: \"3f82ae53-4310-4884-ac43-08c9352cdd68\") " pod="calico-system/goldmane-58fd7646b9-d9w88" Jul 10 23:54:28.797330 kubelet[2641]: I0710 23:54:28.797081 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f1c62578-5eea-4dd0-89a8-770515134a8b-calico-apiserver-certs\") pod \"calico-apiserver-54f656494b-n4vd4\" (UID: \"f1c62578-5eea-4dd0-89a8-770515134a8b\") " pod="calico-apiserver/calico-apiserver-54f656494b-n4vd4" Jul 10 23:54:28.797330 kubelet[2641]: I0710 23:54:28.797267 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/37d160c2-6211-496a-8f08-d88ca9af73d3-calico-apiserver-certs\") pod \"calico-apiserver-54f656494b-4knwr\" (UID: \"37d160c2-6211-496a-8f08-d88ca9af73d3\") " pod="calico-apiserver/calico-apiserver-54f656494b-4knwr" Jul 10 23:54:28.798474 kubelet[2641]: I0710 23:54:28.798426 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkznn\" (UniqueName: \"kubernetes.io/projected/e05304b8-2313-4330-b227-1400b69e732c-kube-api-access-bkznn\") pod \"calico-kube-controllers-84b8d4f545-qlpwt\" (UID: \"e05304b8-2313-4330-b227-1400b69e732c\") " pod="calico-system/calico-kube-controllers-84b8d4f545-qlpwt" Jul 10 23:54:28.801948 systemd[1]: Created slice kubepods-besteffort-podf1c62578_5eea_4dd0_89a8_770515134a8b.slice - libcontainer container kubepods-besteffort-podf1c62578_5eea_4dd0_89a8_770515134a8b.slice. Jul 10 23:54:28.807239 systemd[1]: Created slice kubepods-besteffort-pode05304b8_2313_4330_b227_1400b69e732c.slice - libcontainer container kubepods-besteffort-pode05304b8_2313_4330_b227_1400b69e732c.slice. Jul 10 23:54:28.815388 systemd[1]: Created slice kubepods-besteffort-pod1285be41_48fc_433e_a23a_923ab7fc60f1.slice - libcontainer container kubepods-besteffort-pod1285be41_48fc_433e_a23a_923ab7fc60f1.slice. Jul 10 23:54:28.822795 systemd[1]: Created slice kubepods-besteffort-pod3f82ae53_4310_4884_ac43_08c9352cdd68.slice - libcontainer container kubepods-besteffort-pod3f82ae53_4310_4884_ac43_08c9352cdd68.slice. Jul 10 23:54:29.017985 systemd[1]: Created slice kubepods-besteffort-pod4bda0615_d9a8_4ef2_ac3a_8fad441f4e10.slice - libcontainer container kubepods-besteffort-pod4bda0615_d9a8_4ef2_ac3a_8fad441f4e10.slice. Jul 10 23:54:29.020649 containerd[1514]: time="2025-07-10T23:54:29.020601334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zv4h7,Uid:4bda0615-d9a8-4ef2-ac3a-8fad441f4e10,Namespace:calico-system,Attempt:0,}" Jul 10 23:54:29.091556 containerd[1514]: time="2025-07-10T23:54:29.091514663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f656494b-4knwr,Uid:37d160c2-6211-496a-8f08-d88ca9af73d3,Namespace:calico-apiserver,Attempt:0,}" Jul 10 23:54:29.092097 containerd[1514]: time="2025-07-10T23:54:29.091514343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kcz6s,Uid:e8d41fb3-c49c-4d59-9ceb-4117861c87d5,Namespace:kube-system,Attempt:0,}" Jul 10 23:54:29.099555 containerd[1514]: time="2025-07-10T23:54:29.099522029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j77jp,Uid:d23c2080-afda-4b5d-acb1-7f2e54e5e5f2,Namespace:kube-system,Attempt:0,}" Jul 10 23:54:29.107959 containerd[1514]: time="2025-07-10T23:54:29.107155394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f656494b-n4vd4,Uid:f1c62578-5eea-4dd0-89a8-770515134a8b,Namespace:calico-apiserver,Attempt:0,}" Jul 10 23:54:29.120915 containerd[1514]: time="2025-07-10T23:54:29.120873484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-875bd8f74-g9jlq,Uid:1285be41-48fc-433e-a23a-923ab7fc60f1,Namespace:calico-system,Attempt:0,}" Jul 10 23:54:29.123446 containerd[1514]: time="2025-07-10T23:54:29.123392966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b8d4f545-qlpwt,Uid:e05304b8-2313-4330-b227-1400b69e732c,Namespace:calico-system,Attempt:0,}" Jul 10 23:54:29.128505 containerd[1514]: time="2025-07-10T23:54:29.128475969Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-58fd7646b9-d9w88,Uid:3f82ae53-4310-4884-ac43-08c9352cdd68,Namespace:calico-system,Attempt:0,}" Jul 10 23:54:29.180539 containerd[1514]: time="2025-07-10T23:54:29.180035926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 23:54:29.404601 containerd[1514]: time="2025-07-10T23:54:29.404469884Z" level=error msg="Failed to destroy network for sandbox \"6d9f8931628e8da50a38a6eb03b587f43f115fdad5bb6cf52701f7dbecdf5b7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.408127 containerd[1514]: time="2025-07-10T23:54:29.408075926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f656494b-n4vd4,Uid:f1c62578-5eea-4dd0-89a8-770515134a8b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9f8931628e8da50a38a6eb03b587f43f115fdad5bb6cf52701f7dbecdf5b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.411159 kubelet[2641]: E0710 23:54:29.411085 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9f8931628e8da50a38a6eb03b587f43f115fdad5bb6cf52701f7dbecdf5b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.414215 kubelet[2641]: E0710 23:54:29.414145 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9f8931628e8da50a38a6eb03b587f43f115fdad5bb6cf52701f7dbecdf5b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f656494b-n4vd4" Jul 10 23:54:29.414215 kubelet[2641]: E0710 23:54:29.414216 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9f8931628e8da50a38a6eb03b587f43f115fdad5bb6cf52701f7dbecdf5b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f656494b-n4vd4" Jul 10 23:54:29.414350 kubelet[2641]: E0710 23:54:29.414269 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f656494b-n4vd4_calico-apiserver(f1c62578-5eea-4dd0-89a8-770515134a8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f656494b-n4vd4_calico-apiserver(f1c62578-5eea-4dd0-89a8-770515134a8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d9f8931628e8da50a38a6eb03b587f43f115fdad5bb6cf52701f7dbecdf5b7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f656494b-n4vd4" podUID="f1c62578-5eea-4dd0-89a8-770515134a8b" Jul 10 
23:54:29.423376 containerd[1514]: time="2025-07-10T23:54:29.423310577Z" level=error msg="Failed to destroy network for sandbox \"07e4327cb7a32f53889fa90a791115274172d913e0ac76e2d6cbffed057c010d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.425876 containerd[1514]: time="2025-07-10T23:54:29.425819939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f656494b-4knwr,Uid:37d160c2-6211-496a-8f08-d88ca9af73d3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07e4327cb7a32f53889fa90a791115274172d913e0ac76e2d6cbffed057c010d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.426518 kubelet[2641]: E0710 23:54:29.426271 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07e4327cb7a32f53889fa90a791115274172d913e0ac76e2d6cbffed057c010d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.426619 kubelet[2641]: E0710 23:54:29.426546 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07e4327cb7a32f53889fa90a791115274172d913e0ac76e2d6cbffed057c010d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f656494b-4knwr" Jul 10 23:54:29.426619 kubelet[2641]: E0710 23:54:29.426604 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07e4327cb7a32f53889fa90a791115274172d913e0ac76e2d6cbffed057c010d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f656494b-4knwr" Jul 10 23:54:29.426749 kubelet[2641]: E0710 23:54:29.426645 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f656494b-4knwr_calico-apiserver(37d160c2-6211-496a-8f08-d88ca9af73d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f656494b-4knwr_calico-apiserver(37d160c2-6211-496a-8f08-d88ca9af73d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07e4327cb7a32f53889fa90a791115274172d913e0ac76e2d6cbffed057c010d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f656494b-4knwr" podUID="37d160c2-6211-496a-8f08-d88ca9af73d3" Jul 10 23:54:29.430078 containerd[1514]: time="2025-07-10T23:54:29.430031902Z" level=error msg="Failed to destroy network for sandbox \"6861854e9714f1007869bd0538378aa35bfd08ca432bd049051080212f7bb09e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.431662 containerd[1514]: time="2025-07-10T23:54:29.431613183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-875bd8f74-g9jlq,Uid:1285be41-48fc-433e-a23a-923ab7fc60f1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6861854e9714f1007869bd0538378aa35bfd08ca432bd049051080212f7bb09e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.431844 kubelet[2641]: E0710 23:54:29.431809 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6861854e9714f1007869bd0538378aa35bfd08ca432bd049051080212f7bb09e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.431888 kubelet[2641]: E0710 23:54:29.431860 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6861854e9714f1007869bd0538378aa35bfd08ca432bd049051080212f7bb09e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-875bd8f74-g9jlq" Jul 10 23:54:29.431888 kubelet[2641]: E0710 23:54:29.431878 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6861854e9714f1007869bd0538378aa35bfd08ca432bd049051080212f7bb09e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-875bd8f74-g9jlq" Jul 10 23:54:29.431950 kubelet[2641]: E0710 23:54:29.431915 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-875bd8f74-g9jlq_calico-system(1285be41-48fc-433e-a23a-923ab7fc60f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-875bd8f74-g9jlq_calico-system(1285be41-48fc-433e-a23a-923ab7fc60f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6861854e9714f1007869bd0538378aa35bfd08ca432bd049051080212f7bb09e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-875bd8f74-g9jlq" podUID="1285be41-48fc-433e-a23a-923ab7fc60f1" Jul 10 23:54:29.433836 containerd[1514]: time="2025-07-10T23:54:29.433421384Z" level=error msg="Failed to destroy network for sandbox \"3a449f01dc50fe3e7fdec55165900f7eb8ab978465519326dd474e59e0fc40e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.435752 containerd[1514]: time="2025-07-10T23:54:29.435715986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kcz6s,Uid:e8d41fb3-c49c-4d59-9ceb-4117861c87d5,Namespace:kube-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a449f01dc50fe3e7fdec55165900f7eb8ab978465519326dd474e59e0fc40e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.436112 kubelet[2641]: E0710 23:54:29.436078 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a449f01dc50fe3e7fdec55165900f7eb8ab978465519326dd474e59e0fc40e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.436159 kubelet[2641]: E0710 23:54:29.436132 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a449f01dc50fe3e7fdec55165900f7eb8ab978465519326dd474e59e0fc40e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kcz6s" Jul 10 23:54:29.436159 kubelet[2641]: E0710 23:54:29.436151 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a449f01dc50fe3e7fdec55165900f7eb8ab978465519326dd474e59e0fc40e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kcz6s" Jul 10 23:54:29.436260 kubelet[2641]: E0710 23:54:29.436235 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kcz6s_kube-system(e8d41fb3-c49c-4d59-9ceb-4117861c87d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kcz6s_kube-system(e8d41fb3-c49c-4d59-9ceb-4117861c87d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a449f01dc50fe3e7fdec55165900f7eb8ab978465519326dd474e59e0fc40e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kcz6s" podUID="e8d41fb3-c49c-4d59-9ceb-4117861c87d5" Jul 10 23:54:29.436961 containerd[1514]: time="2025-07-10T23:54:29.436916586Z" level=error msg="Failed to destroy network for sandbox \"39888583293df54277c2c3a8bb8a9fd1ef8aa5fbdd8b569d14dcad98ececcff5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.439228 containerd[1514]: time="2025-07-10T23:54:29.439191348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j77jp,Uid:d23c2080-afda-4b5d-acb1-7f2e54e5e5f2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39888583293df54277c2c3a8bb8a9fd1ef8aa5fbdd8b569d14dcad98ececcff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.439551 kubelet[2641]: 
E0710 23:54:29.439379 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39888583293df54277c2c3a8bb8a9fd1ef8aa5fbdd8b569d14dcad98ececcff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.439615 kubelet[2641]: E0710 23:54:29.439552 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39888583293df54277c2c3a8bb8a9fd1ef8aa5fbdd8b569d14dcad98ececcff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j77jp" Jul 10 23:54:29.439615 kubelet[2641]: E0710 23:54:29.439571 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39888583293df54277c2c3a8bb8a9fd1ef8aa5fbdd8b569d14dcad98ececcff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j77jp" Jul 10 23:54:29.439660 kubelet[2641]: E0710 23:54:29.439621 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-j77jp_kube-system(d23c2080-afda-4b5d-acb1-7f2e54e5e5f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-j77jp_kube-system(d23c2080-afda-4b5d-acb1-7f2e54e5e5f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39888583293df54277c2c3a8bb8a9fd1ef8aa5fbdd8b569d14dcad98ececcff5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-j77jp" podUID="d23c2080-afda-4b5d-acb1-7f2e54e5e5f2" Jul 10 23:54:29.439767 containerd[1514]: time="2025-07-10T23:54:29.439627228Z" level=error msg="Failed to destroy network for sandbox \"036ea76252d6d78eb217d7a04b49c5dac6359cb164cb421ab298ad705a2f2461\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.441439 containerd[1514]: time="2025-07-10T23:54:29.441324789Z" level=error msg="Failed to destroy network for sandbox \"4996ec63183b61b1667083517ede759e2ce6670b641af8a49a3cc7928dde5581\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.441625 containerd[1514]: time="2025-07-10T23:54:29.441582110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b8d4f545-qlpwt,Uid:e05304b8-2313-4330-b227-1400b69e732c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"036ea76252d6d78eb217d7a04b49c5dac6359cb164cb421ab298ad705a2f2461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 10 23:54:29.442220 kubelet[2641]: E0710 23:54:29.442160 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"036ea76252d6d78eb217d7a04b49c5dac6359cb164cb421ab298ad705a2f2461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.442411 kubelet[2641]: E0710 23:54:29.442319 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"036ea76252d6d78eb217d7a04b49c5dac6359cb164cb421ab298ad705a2f2461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84b8d4f545-qlpwt" Jul 10 23:54:29.442474 kubelet[2641]: E0710 23:54:29.442456 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"036ea76252d6d78eb217d7a04b49c5dac6359cb164cb421ab298ad705a2f2461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84b8d4f545-qlpwt" Jul 10 23:54:29.442589 kubelet[2641]: E0710 23:54:29.442559 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84b8d4f545-qlpwt_calico-system(e05304b8-2313-4330-b227-1400b69e732c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84b8d4f545-qlpwt_calico-system(e05304b8-2313-4330-b227-1400b69e732c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"036ea76252d6d78eb217d7a04b49c5dac6359cb164cb421ab298ad705a2f2461\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84b8d4f545-qlpwt" podUID="e05304b8-2313-4330-b227-1400b69e732c" Jul 10 23:54:29.443559 containerd[1514]: time="2025-07-10T23:54:29.443467951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zv4h7,Uid:4bda0615-d9a8-4ef2-ac3a-8fad441f4e10,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4996ec63183b61b1667083517ede759e2ce6670b641af8a49a3cc7928dde5581\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.443773 kubelet[2641]: E0710 23:54:29.443748 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4996ec63183b61b1667083517ede759e2ce6670b641af8a49a3cc7928dde5581\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.443830 kubelet[2641]: E0710 23:54:29.443783 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4996ec63183b61b1667083517ede759e2ce6670b641af8a49a3cc7928dde5581\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zv4h7" Jul 10 23:54:29.443884 kubelet[2641]: E0710 23:54:29.443828 2641 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4996ec63183b61b1667083517ede759e2ce6670b641af8a49a3cc7928dde5581\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zv4h7" Jul 10 23:54:29.443884 kubelet[2641]: E0710 23:54:29.443867 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zv4h7_calico-system(4bda0615-d9a8-4ef2-ac3a-8fad441f4e10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zv4h7_calico-system(4bda0615-d9a8-4ef2-ac3a-8fad441f4e10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4996ec63183b61b1667083517ede759e2ce6670b641af8a49a3cc7928dde5581\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zv4h7" podUID="4bda0615-d9a8-4ef2-ac3a-8fad441f4e10" Jul 10 23:54:29.447396 containerd[1514]: time="2025-07-10T23:54:29.447351114Z" level=error msg="Failed to destroy network for sandbox \"cdbfb908be4374e8c1b824f7956117da3ca32d126a528df0cc254338db5a3d30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.448291 containerd[1514]: time="2025-07-10T23:54:29.448257994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d9w88,Uid:3f82ae53-4310-4884-ac43-08c9352cdd68,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdbfb908be4374e8c1b824f7956117da3ca32d126a528df0cc254338db5a3d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.448623 kubelet[2641]: E0710 23:54:29.448512 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdbfb908be4374e8c1b824f7956117da3ca32d126a528df0cc254338db5a3d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:54:29.448623 kubelet[2641]: E0710 23:54:29.448579 2641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdbfb908be4374e8c1b824f7956117da3ca32d126a528df0cc254338db5a3d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-d9w88" Jul 10 23:54:29.448832 kubelet[2641]: E0710 23:54:29.448596 2641 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdbfb908be4374e8c1b824f7956117da3ca32d126a528df0cc254338db5a3d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-d9w88" Jul 10 23:54:29.448832 kubelet[2641]: E0710 23:54:29.448756 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-d9w88_calico-system(3f82ae53-4310-4884-ac43-08c9352cdd68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-d9w88_calico-system(3f82ae53-4310-4884-ac43-08c9352cdd68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdbfb908be4374e8c1b824f7956117da3ca32d126a528df0cc254338db5a3d30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-d9w88" podUID="3f82ae53-4310-4884-ac43-08c9352cdd68" Jul 10 23:54:30.081101 systemd[1]: run-netns-cni\x2d916cdfb1\x2d0e3e\x2d576f\x2d3720\x2dfdf4d01f6a12.mount: Deactivated successfully. Jul 10 23:54:30.081211 systemd[1]: run-netns-cni\x2df4c2cab5\x2dbc1b\x2da4f8\x2d1c54\x2d4e1665b890a3.mount: Deactivated successfully. Jul 10 23:54:30.081259 systemd[1]: run-netns-cni\x2dd8873ae3\x2df585\x2d0762\x2d1396\x2db93ba582d829.mount: Deactivated successfully. Jul 10 23:54:30.081310 systemd[1]: run-netns-cni\x2d9da4b52d\x2d1b08\x2df19a\x2d1356\x2daf0120f2c926.mount: Deactivated successfully. Jul 10 23:54:30.081359 systemd[1]: run-netns-cni\x2d6d74bef8\x2da6dc\x2d0026\x2dddf1\x2d77627eb98003.mount: Deactivated successfully. Jul 10 23:54:33.090364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917938575.mount: Deactivated successfully. 
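[Editor's note] Every sandbox failure above reports the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes once it is running with /var/lib/calico/ mounted, and the error text itself tells the operator to check exactly that. A minimal sketch of that precondition check, illustrative only and not Calico's actual source; the path and the remediation hint are taken from the error messages above:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    // nodenamePath is the file the CNI plugin expects calico/node to have written.
    const nodenamePath = "/var/lib/calico/nodename"

    func main() {
        data, err := os.ReadFile(nodenamePath)
        switch {
        case errors.Is(err, fs.ErrNotExist):
            // The condition reported throughout the log: the file is missing until
            // the calico-node container is up and has mounted /var/lib/calico/.
            fmt.Println("nodename missing: check that calico/node is running and has mounted /var/lib/calico/")
        case err != nil:
            fmt.Println("reading nodename failed:", err)
        default:
            fmt.Println("CNI would use node name:", string(data))
        }
    }

Once calico-node starts (StartContainer for d1f0431c... below), the retried sandboxes succeed.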
Jul 10 23:54:33.124147 containerd[1514]: time="2025-07-10T23:54:33.124087148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 10 23:54:33.127778 containerd[1514]: time="2025-07-10T23:54:33.127687390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.943593701s" Jul 10 23:54:33.127778 containerd[1514]: time="2025-07-10T23:54:33.127724030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 10 23:54:33.128481 containerd[1514]: time="2025-07-10T23:54:33.128228750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:33.130345 containerd[1514]: time="2025-07-10T23:54:33.130314911Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:33.131202 containerd[1514]: time="2025-07-10T23:54:33.131153431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:33.138196 containerd[1514]: time="2025-07-10T23:54:33.138028115Z" level=info msg="CreateContainer within sandbox \"7b4ab67b652c32a9a09187a24803eb363e9a3d8a1e46676de6be0b220d98ee2e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 23:54:33.144963 containerd[1514]: time="2025-07-10T23:54:33.144926079Z" level=info msg="Container d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:33.162087 containerd[1514]: time="2025-07-10T23:54:33.162034048Z" level=info msg="CreateContainer within sandbox \"7b4ab67b652c32a9a09187a24803eb363e9a3d8a1e46676de6be0b220d98ee2e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779\"" Jul 10 23:54:33.162860 containerd[1514]: time="2025-07-10T23:54:33.162577488Z" level=info msg="StartContainer for \"d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779\"" Jul 10 23:54:33.164077 containerd[1514]: time="2025-07-10T23:54:33.164046649Z" level=info msg="connecting to shim d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779" address="unix:///run/containerd/s/dcd27f9898112a87117b559a8c1bfe1d6797c55d2c0f001d8080e9e50626794c" protocol=ttrpc version=3 Jul 10 23:54:33.183468 systemd[1]: Started cri-containerd-d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779.scope - libcontainer container d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779. Jul 10 23:54:33.250798 containerd[1514]: time="2025-07-10T23:54:33.250685576Z" level=info msg="StartContainer for \"d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779\" returns successfully" Jul 10 23:54:33.463307 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 23:54:33.463402 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jul 10 23:54:33.630606 kubelet[2641]: I0710 23:54:33.630561 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1285be41-48fc-433e-a23a-923ab7fc60f1-whisker-backend-key-pair\") pod \"1285be41-48fc-433e-a23a-923ab7fc60f1\" (UID: \"1285be41-48fc-433e-a23a-923ab7fc60f1\") " Jul 10 23:54:33.630606 kubelet[2641]: I0710 23:54:33.630605 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1285be41-48fc-433e-a23a-923ab7fc60f1-whisker-ca-bundle\") pod \"1285be41-48fc-433e-a23a-923ab7fc60f1\" (UID: \"1285be41-48fc-433e-a23a-923ab7fc60f1\") " Jul 10 23:54:33.631036 kubelet[2641]: I0710 23:54:33.630626 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnxjl\" (UniqueName: \"kubernetes.io/projected/1285be41-48fc-433e-a23a-923ab7fc60f1-kube-api-access-fnxjl\") pod \"1285be41-48fc-433e-a23a-923ab7fc60f1\" (UID: \"1285be41-48fc-433e-a23a-923ab7fc60f1\") " Jul 10 23:54:33.641649 kubelet[2641]: I0710 23:54:33.641612 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1285be41-48fc-433e-a23a-923ab7fc60f1-kube-api-access-fnxjl" (OuterVolumeSpecName: "kube-api-access-fnxjl") pod "1285be41-48fc-433e-a23a-923ab7fc60f1" (UID: "1285be41-48fc-433e-a23a-923ab7fc60f1"). InnerVolumeSpecName "kube-api-access-fnxjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 23:54:33.643906 kubelet[2641]: I0710 23:54:33.643875 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1285be41-48fc-433e-a23a-923ab7fc60f1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1285be41-48fc-433e-a23a-923ab7fc60f1" (UID: "1285be41-48fc-433e-a23a-923ab7fc60f1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 23:54:33.652475 kubelet[2641]: I0710 23:54:33.652439 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1285be41-48fc-433e-a23a-923ab7fc60f1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1285be41-48fc-433e-a23a-923ab7fc60f1" (UID: "1285be41-48fc-433e-a23a-923ab7fc60f1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 23:54:33.731449 kubelet[2641]: I0710 23:54:33.731355 2641 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1285be41-48fc-433e-a23a-923ab7fc60f1-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 23:54:33.731449 kubelet[2641]: I0710 23:54:33.731395 2641 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1285be41-48fc-433e-a23a-923ab7fc60f1-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 23:54:33.731449 kubelet[2641]: I0710 23:54:33.731406 2641 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnxjl\" (UniqueName: \"kubernetes.io/projected/1285be41-48fc-433e-a23a-923ab7fc60f1-kube-api-access-fnxjl\") on node \"localhost\" DevicePath \"\"" Jul 10 23:54:34.026197 systemd[1]: Removed slice kubepods-besteffort-pod1285be41_48fc_433e_a23a_923ab7fc60f1.slice - libcontainer container kubepods-besteffort-pod1285be41_48fc_433e_a23a_923ab7fc60f1.slice. 
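[Editor's note] The teardown entries above show how kubelet-derived names surface in systemd: pod UID 1285be41-48fc-433e-a23a-923ab7fc60f1 becomes the cgroup slice kubepods-besteffort-pod1285be41_48fc_433e_a23a_923ab7fc60f1.slice, and literal '-' characters inside netns and volume mount unit names are escaped as \x2d. A rough sketch of those two transformations as they appear in this log; assumed simplifications only, the real kubelet and systemd escaping handle more cases:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceForPod mimics the naming visible in the log: QoS-class prefix plus the
    // pod UID with '-' replaced by '_', ending in ".slice".
    func sliceForPod(qosClass, uid string) string {
        return "kubepods-" + qosClass + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    // escapeDash shows the systemd-style escaping of '-' as \x2d seen in the
    // run-netns and volume mount unit names above. The leading "run-netns-"
    // segment encodes the path /run/netns/ with '/' mapped to '-'.
    func escapeDash(s string) string {
        return strings.ReplaceAll(s, "-", `\x2d`)
    }

    func main() {
        uid := "1285be41-48fc-433e-a23a-923ab7fc60f1"
        fmt.Println(sliceForPod("besteffort", uid))
        fmt.Println("run-netns-" + escapeDash("cni-916cdfb1-0e3e-576f-3720-fdf4d01f6a12") + ".mount")
    }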
Jul 10 23:54:34.091710 systemd[1]: var-lib-kubelet-pods-1285be41\x2d48fc\x2d433e\x2da23a\x2d923ab7fc60f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfnxjl.mount: Deactivated successfully. Jul 10 23:54:34.091795 systemd[1]: var-lib-kubelet-pods-1285be41\x2d48fc\x2d433e\x2da23a\x2d923ab7fc60f1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 23:54:34.226408 kubelet[2641]: I0710 23:54:34.226321 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bhtzd" podStartSLOduration=1.87447542 podStartE2EDuration="14.226304659s" podCreationTimestamp="2025-07-10 23:54:20 +0000 UTC" firstStartedPulling="2025-07-10 23:54:20.776549071 +0000 UTC m=+18.856066401" lastFinishedPulling="2025-07-10 23:54:33.12837831 +0000 UTC m=+31.207895640" observedRunningTime="2025-07-10 23:54:34.215311853 +0000 UTC m=+32.294829223" watchObservedRunningTime="2025-07-10 23:54:34.226304659 +0000 UTC m=+32.305821949" Jul 10 23:54:34.267579 systemd[1]: Created slice kubepods-besteffort-pod6bd9a1db_a640_408f_baeb_d7474232b580.slice - libcontainer container kubepods-besteffort-pod6bd9a1db_a640_408f_baeb_d7474232b580.slice. Jul 10 23:54:34.334901 kubelet[2641]: I0710 23:54:34.334757 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqb8h\" (UniqueName: \"kubernetes.io/projected/6bd9a1db-a640-408f-baeb-d7474232b580-kube-api-access-mqb8h\") pod \"whisker-7f9978849d-plhpv\" (UID: \"6bd9a1db-a640-408f-baeb-d7474232b580\") " pod="calico-system/whisker-7f9978849d-plhpv" Jul 10 23:54:34.335318 kubelet[2641]: I0710 23:54:34.335272 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd9a1db-a640-408f-baeb-d7474232b580-whisker-ca-bundle\") pod \"whisker-7f9978849d-plhpv\" (UID: \"6bd9a1db-a640-408f-baeb-d7474232b580\") " pod="calico-system/whisker-7f9978849d-plhpv" Jul 10 23:54:34.336516 kubelet[2641]: I0710 23:54:34.336496 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bd9a1db-a640-408f-baeb-d7474232b580-whisker-backend-key-pair\") pod \"whisker-7f9978849d-plhpv\" (UID: \"6bd9a1db-a640-408f-baeb-d7474232b580\") " pod="calico-system/whisker-7f9978849d-plhpv" Jul 10 23:54:34.346107 containerd[1514]: time="2025-07-10T23:54:34.346073680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779\" id:\"b24863e8a468a37c0b95e57413ba0b823286ea7a46ce1ceed632d4fc59e72b63\" pid:3861 exit_status:1 exited_at:{seconds:1752191674 nanos:345779560}" Jul 10 23:54:34.574573 containerd[1514]: time="2025-07-10T23:54:34.574515476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f9978849d-plhpv,Uid:6bd9a1db-a640-408f-baeb-d7474232b580,Namespace:calico-system,Attempt:0,}" Jul 10 23:54:34.778288 systemd-networkd[1433]: cali8ab755051cb: Link UP Jul 10 23:54:34.779855 systemd-networkd[1433]: cali8ab755051cb: Gained carrier Jul 10 23:54:34.802973 containerd[1514]: 2025-07-10 23:54:34.601 [INFO][3875] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 23:54:34.802973 containerd[1514]: 2025-07-10 23:54:34.640 [INFO][3875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--7f9978849d--plhpv-eth0 whisker-7f9978849d- calico-system 6bd9a1db-a640-408f-baeb-d7474232b580 857 0 2025-07-10 23:54:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f9978849d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7f9978849d-plhpv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8ab755051cb [] [] }} ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Namespace="calico-system" Pod="whisker-7f9978849d-plhpv" WorkloadEndpoint="localhost-k8s-whisker--7f9978849d--plhpv-" Jul 10 23:54:34.802973 containerd[1514]: 2025-07-10 23:54:34.640 [INFO][3875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Namespace="calico-system" Pod="whisker-7f9978849d-plhpv" WorkloadEndpoint="localhost-k8s-whisker--7f9978849d--plhpv-eth0" Jul 10 23:54:34.802973 containerd[1514]: 2025-07-10 23:54:34.722 [INFO][3890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" HandleID="k8s-pod-network.02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Workload="localhost-k8s-whisker--7f9978849d--plhpv-eth0" Jul 10 23:54:34.803237 containerd[1514]: 2025-07-10 23:54:34.722 [INFO][3890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" HandleID="k8s-pod-network.02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Workload="localhost-k8s-whisker--7f9978849d--plhpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005b0de0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7f9978849d-plhpv", "timestamp":"2025-07-10 23:54:34.722409112 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:54:34.803237 containerd[1514]: 2025-07-10 23:54:34.722 [INFO][3890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:54:34.803237 containerd[1514]: 2025-07-10 23:54:34.722 [INFO][3890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 23:54:34.803237 containerd[1514]: 2025-07-10 23:54:34.722 [INFO][3890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:54:34.803237 containerd[1514]: 2025-07-10 23:54:34.734 [INFO][3890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" host="localhost" Jul 10 23:54:34.803237 containerd[1514]: 2025-07-10 23:54:34.739 [INFO][3890] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:54:34.803237 containerd[1514]: 2025-07-10 23:54:34.743 [INFO][3890] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:54:34.803237 containerd[1514]: 2025-07-10 23:54:34.745 [INFO][3890] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:34.803237 containerd[1514]: 2025-07-10 23:54:34.747 [INFO][3890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:34.803237 containerd[1514]: 2025-07-10 23:54:34.747 [INFO][3890] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" host="localhost" Jul 10 23:54:34.803610 containerd[1514]: 2025-07-10 23:54:34.752 [INFO][3890] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1 Jul 10 23:54:34.803610 containerd[1514]: 2025-07-10 23:54:34.758 [INFO][3890] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" host="localhost" Jul 10 23:54:34.803610 containerd[1514]: 2025-07-10 23:54:34.764 [INFO][3890] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" host="localhost" Jul 10 23:54:34.803610 containerd[1514]: 2025-07-10 23:54:34.764 [INFO][3890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" host="localhost" Jul 10 23:54:34.803610 containerd[1514]: 2025-07-10 23:54:34.764 [INFO][3890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
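[Editor's note] The IPAM trace above acquires the host-wide lock, confirms the block 192.168.88.128/26 is affine to this host, and claims 192.168.88.129/26 for the whisker pod. A much simplified sketch of picking the next free address inside such a block; an in-memory set stands in for Calico's datastore here, and the real allocator also tracks handles, affinities, and the lock shown in the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree walks a CIDR block and returns the first address not yet allocated.
    // The block's own base address (.128) is skipped in this sketch, consistent
    // with the log where .129 is the first address handed out.
    func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
            if !allocated[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        allocated := map[netip.Addr]bool{}

        // First assignment, as for whisker-7f9978849d-plhpv above:
        a, _ := nextFree(block, allocated)
        allocated[a] = true
        fmt.Println(a) // 192.168.88.129

        // The next sandbox would get .130, as seen further below for coredns.
        b, _ := nextFree(block, allocated)
        fmt.Println(b) // 192.168.88.130
    }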
Jul 10 23:54:34.803610 containerd[1514]: 2025-07-10 23:54:34.764 [INFO][3890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" HandleID="k8s-pod-network.02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Workload="localhost-k8s-whisker--7f9978849d--plhpv-eth0" Jul 10 23:54:34.803737 containerd[1514]: 2025-07-10 23:54:34.770 [INFO][3875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Namespace="calico-system" Pod="whisker-7f9978849d-plhpv" WorkloadEndpoint="localhost-k8s-whisker--7f9978849d--plhpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f9978849d--plhpv-eth0", GenerateName:"whisker-7f9978849d-", Namespace:"calico-system", SelfLink:"", UID:"6bd9a1db-a640-408f-baeb-d7474232b580", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f9978849d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7f9978849d-plhpv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8ab755051cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:34.803737 containerd[1514]: 2025-07-10 23:54:34.770 [INFO][3875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Namespace="calico-system" Pod="whisker-7f9978849d-plhpv" WorkloadEndpoint="localhost-k8s-whisker--7f9978849d--plhpv-eth0" Jul 10 23:54:34.803849 containerd[1514]: 2025-07-10 23:54:34.770 [INFO][3875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ab755051cb ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Namespace="calico-system" Pod="whisker-7f9978849d-plhpv" WorkloadEndpoint="localhost-k8s-whisker--7f9978849d--plhpv-eth0" Jul 10 23:54:34.803849 containerd[1514]: 2025-07-10 23:54:34.778 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Namespace="calico-system" Pod="whisker-7f9978849d-plhpv" WorkloadEndpoint="localhost-k8s-whisker--7f9978849d--plhpv-eth0" Jul 10 23:54:34.803886 containerd[1514]: 2025-07-10 23:54:34.782 [INFO][3875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Namespace="calico-system" Pod="whisker-7f9978849d-plhpv" WorkloadEndpoint="localhost-k8s-whisker--7f9978849d--plhpv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f9978849d--plhpv-eth0", GenerateName:"whisker-7f9978849d-", Namespace:"calico-system", SelfLink:"", UID:"6bd9a1db-a640-408f-baeb-d7474232b580", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f9978849d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1", Pod:"whisker-7f9978849d-plhpv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8ab755051cb", MAC:"e6:78:7f:82:41:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:34.803930 containerd[1514]: 2025-07-10 23:54:34.798 [INFO][3875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" Namespace="calico-system" Pod="whisker-7f9978849d-plhpv" WorkloadEndpoint="localhost-k8s-whisker--7f9978849d--plhpv-eth0" Jul 10 23:54:34.963190 containerd[1514]: time="2025-07-10T23:54:34.962674754Z" level=info msg="connecting to shim 02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1" address="unix:///run/containerd/s/7105fe01fe1aad648e5f358647df0509987ff266cd6824d1829343b927535d80" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:34.988397 systemd[1]: Started cri-containerd-02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1.scope - libcontainer container 02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1. 
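[Editor's note] At this point the plugin has created the host side of the veth pair, cali8ab755051cb, and recorded its MAC e6:78:7f:82:41:2c in the workload endpoint; systemd-networkd then reports the link up and, later, IPv6LL. One way to inspect such interfaces from Go, offered only as a hedged illustration of what the log describes; the "cali" prefix convention is taken from the interface names appearing above:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            // Host-side veth ends created by Calico carry a "cali" prefix,
            // e.g. cali8ab755051cb in the log above.
            if !strings.HasPrefix(ifc.Name, "cali") {
                continue
            }
            up := ifc.Flags&net.FlagUp != 0
            fmt.Printf("%s mac=%s up=%v mtu=%d\n", ifc.Name, ifc.HardwareAddr, up, ifc.MTU)
        }
    }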
Jul 10 23:54:34.999871 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:54:35.019716 containerd[1514]: time="2025-07-10T23:54:35.019674622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f9978849d-plhpv,Uid:6bd9a1db-a640-408f-baeb-d7474232b580,Namespace:calico-system,Attempt:0,} returns sandbox id \"02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1\"" Jul 10 23:54:35.021401 containerd[1514]: time="2025-07-10T23:54:35.021374063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 23:54:35.270720 containerd[1514]: time="2025-07-10T23:54:35.270677222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779\" id:\"c2fabefc45b64a904a2f3290e3866e30f7c54871cb18bc8b25437643f5208ef9\" pid:4064 exit_status:1 exited_at:{seconds:1752191675 nanos:270325382}" Jul 10 23:54:35.918906 containerd[1514]: time="2025-07-10T23:54:35.918847492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:35.920223 containerd[1514]: time="2025-07-10T23:54:35.920184453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 10 23:54:35.921237 containerd[1514]: time="2025-07-10T23:54:35.921203533Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:35.924136 containerd[1514]: time="2025-07-10T23:54:35.924101814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:35.924773 containerd[1514]: time="2025-07-10T23:54:35.924744415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 903.228712ms" Jul 10 23:54:35.924875 containerd[1514]: time="2025-07-10T23:54:35.924841095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 10 23:54:35.944104 containerd[1514]: time="2025-07-10T23:54:35.944066384Z" level=info msg="CreateContainer within sandbox \"02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 23:54:35.982374 containerd[1514]: time="2025-07-10T23:54:35.981355762Z" level=info msg="Container 7686b8376fa58c91ae8a6ac553a4badd991f10b50c76e7ed372e71c6d24c4644: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:35.992994 containerd[1514]: time="2025-07-10T23:54:35.992955407Z" level=info msg="CreateContainer within sandbox \"02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7686b8376fa58c91ae8a6ac553a4badd991f10b50c76e7ed372e71c6d24c4644\"" Jul 10 23:54:35.994735 containerd[1514]: time="2025-07-10T23:54:35.994401008Z" level=info msg="StartContainer for 
\"7686b8376fa58c91ae8a6ac553a4badd991f10b50c76e7ed372e71c6d24c4644\"" Jul 10 23:54:35.996817 containerd[1514]: time="2025-07-10T23:54:35.996764449Z" level=info msg="connecting to shim 7686b8376fa58c91ae8a6ac553a4badd991f10b50c76e7ed372e71c6d24c4644" address="unix:///run/containerd/s/7105fe01fe1aad648e5f358647df0509987ff266cd6824d1829343b927535d80" protocol=ttrpc version=3 Jul 10 23:54:36.020059 kubelet[2641]: I0710 23:54:36.020007 2641 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1285be41-48fc-433e-a23a-923ab7fc60f1" path="/var/lib/kubelet/pods/1285be41-48fc-433e-a23a-923ab7fc60f1/volumes" Jul 10 23:54:36.027387 systemd[1]: Started cri-containerd-7686b8376fa58c91ae8a6ac553a4badd991f10b50c76e7ed372e71c6d24c4644.scope - libcontainer container 7686b8376fa58c91ae8a6ac553a4badd991f10b50c76e7ed372e71c6d24c4644. Jul 10 23:54:36.079804 containerd[1514]: time="2025-07-10T23:54:36.079765686Z" level=info msg="StartContainer for \"7686b8376fa58c91ae8a6ac553a4badd991f10b50c76e7ed372e71c6d24c4644\" returns successfully" Jul 10 23:54:36.080837 containerd[1514]: time="2025-07-10T23:54:36.080811967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 23:54:36.271161 containerd[1514]: time="2025-07-10T23:54:36.271117492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779\" id:\"0e57cf88cb12d7d24bbc84b2de917aa67dcb0a276506db76cc22bd616158ef84\" pid:4152 exit_status:1 exited_at:{seconds:1752191676 nanos:270796652}" Jul 10 23:54:36.723373 systemd-networkd[1433]: cali8ab755051cb: Gained IPv6LL Jul 10 23:54:37.497133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1282125363.mount: Deactivated successfully. Jul 10 23:54:37.514061 containerd[1514]: time="2025-07-10T23:54:37.514003634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:37.514558 containerd[1514]: time="2025-07-10T23:54:37.514520315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 10 23:54:37.515348 containerd[1514]: time="2025-07-10T23:54:37.515305075Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:37.517375 containerd[1514]: time="2025-07-10T23:54:37.517326676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:37.517995 containerd[1514]: time="2025-07-10T23:54:37.517955516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.437111029s" Jul 10 23:54:37.518042 containerd[1514]: time="2025-07-10T23:54:37.517994396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 10 23:54:37.521452 containerd[1514]: time="2025-07-10T23:54:37.520963677Z" 
level=info msg="CreateContainer within sandbox \"02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 23:54:37.535297 containerd[1514]: time="2025-07-10T23:54:37.535245243Z" level=info msg="Container ad9bcef81dc674683d176abeecd3b7b9921c2d2f137ee8b80184b59013d3fced: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:37.555090 containerd[1514]: time="2025-07-10T23:54:37.554985691Z" level=info msg="CreateContainer within sandbox \"02b870926e4f0ae707d334cd4652f5a885cd03a0c14b64c613081892f43e71f1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ad9bcef81dc674683d176abeecd3b7b9921c2d2f137ee8b80184b59013d3fced\"" Jul 10 23:54:37.555774 containerd[1514]: time="2025-07-10T23:54:37.555572452Z" level=info msg="StartContainer for \"ad9bcef81dc674683d176abeecd3b7b9921c2d2f137ee8b80184b59013d3fced\"" Jul 10 23:54:37.556841 containerd[1514]: time="2025-07-10T23:54:37.556806572Z" level=info msg="connecting to shim ad9bcef81dc674683d176abeecd3b7b9921c2d2f137ee8b80184b59013d3fced" address="unix:///run/containerd/s/7105fe01fe1aad648e5f358647df0509987ff266cd6824d1829343b927535d80" protocol=ttrpc version=3 Jul 10 23:54:37.590394 systemd[1]: Started cri-containerd-ad9bcef81dc674683d176abeecd3b7b9921c2d2f137ee8b80184b59013d3fced.scope - libcontainer container ad9bcef81dc674683d176abeecd3b7b9921c2d2f137ee8b80184b59013d3fced. Jul 10 23:54:37.648990 containerd[1514]: time="2025-07-10T23:54:37.648936571Z" level=info msg="StartContainer for \"ad9bcef81dc674683d176abeecd3b7b9921c2d2f137ee8b80184b59013d3fced\" returns successfully" Jul 10 23:54:38.226119 kubelet[2641]: I0710 23:54:38.225924 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f9978849d-plhpv" podStartSLOduration=1.7281697139999999 podStartE2EDuration="4.225908327s" podCreationTimestamp="2025-07-10 23:54:34 +0000 UTC" firstStartedPulling="2025-07-10 23:54:35.020994023 +0000 UTC m=+33.100511353" lastFinishedPulling="2025-07-10 23:54:37.518732676 +0000 UTC m=+35.598249966" observedRunningTime="2025-07-10 23:54:38.225010407 +0000 UTC m=+36.304527737" watchObservedRunningTime="2025-07-10 23:54:38.225908327 +0000 UTC m=+36.305425697" Jul 10 23:54:40.013787 containerd[1514]: time="2025-07-10T23:54:40.013701286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kcz6s,Uid:e8d41fb3-c49c-4d59-9ceb-4117861c87d5,Namespace:kube-system,Attempt:0,}" Jul 10 23:54:40.084512 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:50744.service - OpenSSH per-connection server daemon (10.0.0.1:50744). Jul 10 23:54:40.144661 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 50744 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:54:40.147764 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:54:40.152911 systemd-logind[1488]: New session 8 of user core. Jul 10 23:54:40.158368 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 10 23:54:40.210197 systemd-networkd[1433]: cali62e7ede7244: Link UP Jul 10 23:54:40.210904 systemd-networkd[1433]: cali62e7ede7244: Gained carrier Jul 10 23:54:40.231241 containerd[1514]: 2025-07-10 23:54:40.095 [INFO][4302] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 23:54:40.231241 containerd[1514]: 2025-07-10 23:54:40.126 [INFO][4302] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0 coredns-7c65d6cfc9- kube-system e8d41fb3-c49c-4d59-9ceb-4117861c87d5 788 0 2025-07-10 23:54:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-kcz6s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali62e7ede7244 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcz6s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kcz6s-" Jul 10 23:54:40.231241 containerd[1514]: 2025-07-10 23:54:40.126 [INFO][4302] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcz6s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0" Jul 10 23:54:40.231241 containerd[1514]: 2025-07-10 23:54:40.172 [INFO][4314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" HandleID="k8s-pod-network.9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Workload="localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0" Jul 10 23:54:40.231476 containerd[1514]: 2025-07-10 23:54:40.172 [INFO][4314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" HandleID="k8s-pod-network.9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Workload="localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000397e60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-kcz6s", "timestamp":"2025-07-10 23:54:40.17214222 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:54:40.231476 containerd[1514]: 2025-07-10 23:54:40.172 [INFO][4314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:54:40.231476 containerd[1514]: 2025-07-10 23:54:40.172 [INFO][4314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 23:54:40.231476 containerd[1514]: 2025-07-10 23:54:40.172 [INFO][4314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:54:40.231476 containerd[1514]: 2025-07-10 23:54:40.182 [INFO][4314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" host="localhost" Jul 10 23:54:40.231476 containerd[1514]: 2025-07-10 23:54:40.186 [INFO][4314] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:54:40.231476 containerd[1514]: 2025-07-10 23:54:40.190 [INFO][4314] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:54:40.231476 containerd[1514]: 2025-07-10 23:54:40.191 [INFO][4314] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:40.231476 containerd[1514]: 2025-07-10 23:54:40.193 [INFO][4314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:40.231476 containerd[1514]: 2025-07-10 23:54:40.193 [INFO][4314] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" host="localhost" Jul 10 23:54:40.231820 containerd[1514]: 2025-07-10 23:54:40.195 [INFO][4314] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af Jul 10 23:54:40.231820 containerd[1514]: 2025-07-10 23:54:40.199 [INFO][4314] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" host="localhost" Jul 10 23:54:40.231820 containerd[1514]: 2025-07-10 23:54:40.204 [INFO][4314] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" host="localhost" Jul 10 23:54:40.231820 containerd[1514]: 2025-07-10 23:54:40.204 [INFO][4314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" host="localhost" Jul 10 23:54:40.231820 containerd[1514]: 2025-07-10 23:54:40.204 [INFO][4314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
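Annotation: the IPAM trace above carves a single address (192.168.88.130) out of the host-affine block 192.168.88.128/26. For the block math, a stdlib-only Go sketch that walks that /26 and confirms it holds 64 addresses, of which .130 is claimed here (.131 through .134 follow later in this log):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The IPAM block seen in the log: 192.168.88.128/26.
	block := netip.MustParsePrefix("192.168.88.128/26")
	n := 0
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if n < 6 {
			fmt.Println(a) // .128, .129, .130, ... mirroring sequential assignment
		}
		n++
	}
	fmt.Println("addresses in block:", n) // 64 for a /26
}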
Jul 10 23:54:40.231820 containerd[1514]: 2025-07-10 23:54:40.204 [INFO][4314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" HandleID="k8s-pod-network.9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Workload="localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0" Jul 10 23:54:40.231936 containerd[1514]: 2025-07-10 23:54:40.206 [INFO][4302] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcz6s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e8d41fb3-c49c-4d59-9ceb-4117861c87d5", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-kcz6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62e7ede7244", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:40.232004 containerd[1514]: 2025-07-10 23:54:40.207 [INFO][4302] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcz6s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0" Jul 10 23:54:40.232004 containerd[1514]: 2025-07-10 23:54:40.207 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62e7ede7244 ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcz6s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0" Jul 10 23:54:40.232004 containerd[1514]: 2025-07-10 23:54:40.209 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcz6s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0" Jul 10 23:54:40.232065 
containerd[1514]: 2025-07-10 23:54:40.210 [INFO][4302] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcz6s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e8d41fb3-c49c-4d59-9ceb-4117861c87d5", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af", Pod:"coredns-7c65d6cfc9-kcz6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62e7ede7244", MAC:"9e:77:72:bc:a5:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:40.232065 containerd[1514]: 2025-07-10 23:54:40.227 [INFO][4302] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcz6s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kcz6s-eth0" Jul 10 23:54:40.258871 containerd[1514]: time="2025-07-10T23:54:40.258815410Z" level=info msg="connecting to shim 9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af" address="unix:///run/containerd/s/4cb1a29c299126269ceb641c35efd04226f8a7886549fd05c6196f94548b048c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:40.286347 systemd[1]: Started cri-containerd-9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af.scope - libcontainer container 9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af. 
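Annotation: in the v3.WorkloadEndpoint dumps above, port numbers are printed in hex (Port:0x35, Port:0x23c1). A tiny Go sketch decoding them back to the familiar CoreDNS values, 53 for dns/dns-tcp and 9153 for metrics:

package main

import "fmt"

func main() {
	// Port values exactly as they appear in the WorkloadEndpointPort dumps.
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%-8s 0x%x = %d\n", name, p, p)
	}
}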
Jul 10 23:54:40.303888 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:54:40.343993 containerd[1514]: time="2025-07-10T23:54:40.343931840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kcz6s,Uid:e8d41fb3-c49c-4d59-9ceb-4117861c87d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af\"" Jul 10 23:54:40.347013 containerd[1514]: time="2025-07-10T23:54:40.346978081Z" level=info msg="CreateContainer within sandbox \"9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:54:40.373479 sshd[4320]: Connection closed by 10.0.0.1 port 50744 Jul 10 23:54:40.373465 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Jul 10 23:54:40.377155 containerd[1514]: time="2025-07-10T23:54:40.376488691Z" level=info msg="Container 63b936e8ef7a8c71192a28a5fd102c45ed2467b4c1dbe41fa7d4e5f45055bb28: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:40.379055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3425383089.mount: Deactivated successfully. Jul 10 23:54:40.379917 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:50744.service: Deactivated successfully. Jul 10 23:54:40.382875 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 23:54:40.386519 containerd[1514]: time="2025-07-10T23:54:40.386479375Z" level=info msg="CreateContainer within sandbox \"9c23ea57dd9a120bf44830022e8a4c320e8ebff4c3c4476df04b856b5db6d3af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63b936e8ef7a8c71192a28a5fd102c45ed2467b4c1dbe41fa7d4e5f45055bb28\"" Jul 10 23:54:40.386709 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit. Jul 10 23:54:40.387613 containerd[1514]: time="2025-07-10T23:54:40.387250535Z" level=info msg="StartContainer for \"63b936e8ef7a8c71192a28a5fd102c45ed2467b4c1dbe41fa7d4e5f45055bb28\"" Jul 10 23:54:40.389376 systemd-logind[1488]: Removed session 8. Jul 10 23:54:40.390004 containerd[1514]: time="2025-07-10T23:54:40.389978816Z" level=info msg="connecting to shim 63b936e8ef7a8c71192a28a5fd102c45ed2467b4c1dbe41fa7d4e5f45055bb28" address="unix:///run/containerd/s/4cb1a29c299126269ceb641c35efd04226f8a7886549fd05c6196f94548b048c" protocol=ttrpc version=3 Jul 10 23:54:40.411372 systemd[1]: Started cri-containerd-63b936e8ef7a8c71192a28a5fd102c45ed2467b4c1dbe41fa7d4e5f45055bb28.scope - libcontainer container 63b936e8ef7a8c71192a28a5fd102c45ed2467b4c1dbe41fa7d4e5f45055bb28. 
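Annotation: each "connecting to shim ... address=unix:///run/containerd/s/..." entry refers to a per-shim Unix socket over which containerd speaks ttrpc. The sketch below only illustrates the transport step (dialing such a socket from Go); it does not implement ttrpc, and the socket path is a placeholder rather than one taken from this host:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder path following the pattern in the log
	// (unix:///run/containerd/s/<hex id>); substitute a real socket to test.
	const shimSocket = "/run/containerd/s/example"

	conn, err := net.DialTimeout("unix", shimSocket, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // expected unless the socket actually exists
		return
	}
	defer conn.Close()
	fmt.Println("connected; containerd would speak ttrpc over this connection")
}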
Jul 10 23:54:40.438246 containerd[1514]: time="2025-07-10T23:54:40.438055592Z" level=info msg="StartContainer for \"63b936e8ef7a8c71192a28a5fd102c45ed2467b4c1dbe41fa7d4e5f45055bb28\" returns successfully" Jul 10 23:54:41.013007 containerd[1514]: time="2025-07-10T23:54:41.012953231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b8d4f545-qlpwt,Uid:e05304b8-2313-4330-b227-1400b69e732c,Namespace:calico-system,Attempt:0,}" Jul 10 23:54:41.013303 containerd[1514]: time="2025-07-10T23:54:41.012971871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f656494b-n4vd4,Uid:f1c62578-5eea-4dd0-89a8-770515134a8b,Namespace:calico-apiserver,Attempt:0,}" Jul 10 23:54:41.127454 systemd-networkd[1433]: calid490783df9e: Link UP Jul 10 23:54:41.127968 systemd-networkd[1433]: calid490783df9e: Gained carrier Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.038 [INFO][4464] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.054 [INFO][4464] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0 calico-apiserver-54f656494b- calico-apiserver f1c62578-5eea-4dd0-89a8-770515134a8b 794 0 2025-07-10 23:54:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54f656494b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54f656494b-n4vd4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid490783df9e [] [] }} ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-n4vd4" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--n4vd4-" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.054 [INFO][4464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-n4vd4" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.081 [INFO][4488] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" HandleID="k8s-pod-network.a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Workload="localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.081 [INFO][4488] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" HandleID="k8s-pod-network.a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Workload="localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54f656494b-n4vd4", "timestamp":"2025-07-10 23:54:41.081738493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.081 [INFO][4488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.083 [INFO][4488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.086 [INFO][4488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.096 [INFO][4488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" host="localhost" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.100 [INFO][4488] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.105 [INFO][4488] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.107 [INFO][4488] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.109 [INFO][4488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.109 [INFO][4488] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" host="localhost" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.110 [INFO][4488] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1 Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.116 [INFO][4488] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" host="localhost" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.122 [INFO][4488] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" host="localhost" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.122 [INFO][4488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" host="localhost" Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.122 [INFO][4488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:54:41.141587 containerd[1514]: 2025-07-10 23:54:41.122 [INFO][4488] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" HandleID="k8s-pod-network.a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Workload="localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0" Jul 10 23:54:41.142427 containerd[1514]: 2025-07-10 23:54:41.124 [INFO][4464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-n4vd4" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0", GenerateName:"calico-apiserver-54f656494b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1c62578-5eea-4dd0-89a8-770515134a8b", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f656494b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54f656494b-n4vd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid490783df9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:41.142427 containerd[1514]: 2025-07-10 23:54:41.124 [INFO][4464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-n4vd4" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0" Jul 10 23:54:41.142427 containerd[1514]: 2025-07-10 23:54:41.124 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid490783df9e ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-n4vd4" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0" Jul 10 23:54:41.142427 containerd[1514]: 2025-07-10 23:54:41.127 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-n4vd4" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0" Jul 10 23:54:41.142427 containerd[1514]: 2025-07-10 23:54:41.130 [INFO][4464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-n4vd4" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0", GenerateName:"calico-apiserver-54f656494b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1c62578-5eea-4dd0-89a8-770515134a8b", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f656494b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1", Pod:"calico-apiserver-54f656494b-n4vd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid490783df9e", MAC:"7a:b9:14:ae:f9:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:41.142427 containerd[1514]: 2025-07-10 23:54:41.139 [INFO][4464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-n4vd4" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--n4vd4-eth0" Jul 10 23:54:41.159705 containerd[1514]: time="2025-07-10T23:54:41.159669319Z" level=info msg="connecting to shim a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1" address="unix:///run/containerd/s/dfb37ef91b96f1bc708a3a5b79e04e1b7302a575b1f3545028544ca5dd867631" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:41.186323 systemd[1]: Started cri-containerd-a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1.scope - libcontainer container a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1. 
Jul 10 23:54:41.199383 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:54:41.245477 systemd-networkd[1433]: calif1edf1bdac3: Link UP Jul 10 23:54:41.245790 systemd-networkd[1433]: calif1edf1bdac3: Gained carrier Jul 10 23:54:41.253844 kubelet[2641]: I0710 23:54:41.253785 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kcz6s" podStartSLOduration=35.253769189 podStartE2EDuration="35.253769189s" podCreationTimestamp="2025-07-10 23:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:54:41.236300943 +0000 UTC m=+39.315818353" watchObservedRunningTime="2025-07-10 23:54:41.253769189 +0000 UTC m=+39.333286519" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.034 [INFO][4453] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.054 [INFO][4453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0 calico-kube-controllers-84b8d4f545- calico-system e05304b8-2313-4330-b227-1400b69e732c 795 0 2025-07-10 23:54:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84b8d4f545 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84b8d4f545-qlpwt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif1edf1bdac3 [] [] }} ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Namespace="calico-system" Pod="calico-kube-controllers-84b8d4f545-qlpwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.054 [INFO][4453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Namespace="calico-system" Pod="calico-kube-controllers-84b8d4f545-qlpwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.084 [INFO][4482] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" HandleID="k8s-pod-network.8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Workload="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.084 [INFO][4482] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" HandleID="k8s-pod-network.8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Workload="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84b8d4f545-qlpwt", "timestamp":"2025-07-10 23:54:41.084344694 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.084 [INFO][4482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.122 [INFO][4482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.122 [INFO][4482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.198 [INFO][4482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" host="localhost" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.205 [INFO][4482] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.212 [INFO][4482] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.214 [INFO][4482] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.221 [INFO][4482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.221 [INFO][4482] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" host="localhost" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.224 [INFO][4482] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700 Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.231 [INFO][4482] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" host="localhost" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.236 [INFO][4482] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" host="localhost" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.236 [INFO][4482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" host="localhost" Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.236 [INFO][4482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
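Annotation: note the interleaving above. The calico-apiserver request ([4488]) acquired the host-wide IPAM lock at 23:54:41.083 and released it at 23:54:41.122; the calico-kube-controllers request ([4482]) logged "About to acquire" at 23:54:41.084 but only acquired the lock at 23:54:41.122, i.e. the moment the first request released it. Concurrent CNI ADDs on one host thus serialize their block updates. A minimal Go sketch of that pattern, using a plain sync.Mutex rather than Calico's actual lock implementation:

package main

import (
	"fmt"
	"sync"
)

// hostWideIPAMLock stands in for the "About to acquire / Acquired / Released
// host-wide IPAM lock" messages: concurrent CNI ADDs on one host serialize
// their address-block updates behind a single lock.
var hostWideIPAMLock sync.Mutex

func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println(pod, "- about to acquire host-wide IPAM lock")
	hostWideIPAMLock.Lock()
	fmt.Println(pod, "- acquired lock, assigning an address from 192.168.88.128/26")
	hostWideIPAMLock.Unlock()
	fmt.Println(pod, "- released lock")
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{
		"calico-apiserver-54f656494b-n4vd4",
		"calico-kube-controllers-84b8d4f545-qlpwt",
	} {
		wg.Add(1)
		go assign(pod, &wg)
	}
	wg.Wait()
}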
Jul 10 23:54:41.271197 containerd[1514]: 2025-07-10 23:54:41.236 [INFO][4482] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" HandleID="k8s-pod-network.8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Workload="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0" Jul 10 23:54:41.272227 containerd[1514]: 2025-07-10 23:54:41.242 [INFO][4453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Namespace="calico-system" Pod="calico-kube-controllers-84b8d4f545-qlpwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0", GenerateName:"calico-kube-controllers-84b8d4f545-", Namespace:"calico-system", SelfLink:"", UID:"e05304b8-2313-4330-b227-1400b69e732c", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b8d4f545", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84b8d4f545-qlpwt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif1edf1bdac3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:41.272227 containerd[1514]: 2025-07-10 23:54:41.242 [INFO][4453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Namespace="calico-system" Pod="calico-kube-controllers-84b8d4f545-qlpwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0" Jul 10 23:54:41.272227 containerd[1514]: 2025-07-10 23:54:41.242 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1edf1bdac3 ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Namespace="calico-system" Pod="calico-kube-controllers-84b8d4f545-qlpwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0" Jul 10 23:54:41.272227 containerd[1514]: 2025-07-10 23:54:41.247 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Namespace="calico-system" Pod="calico-kube-controllers-84b8d4f545-qlpwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0" Jul 10 23:54:41.272227 containerd[1514]: 2025-07-10 23:54:41.252 [INFO][4453] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Namespace="calico-system" Pod="calico-kube-controllers-84b8d4f545-qlpwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0", GenerateName:"calico-kube-controllers-84b8d4f545-", Namespace:"calico-system", SelfLink:"", UID:"e05304b8-2313-4330-b227-1400b69e732c", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b8d4f545", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700", Pod:"calico-kube-controllers-84b8d4f545-qlpwt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif1edf1bdac3", MAC:"3a:af:05:1e:19:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:41.272227 containerd[1514]: 2025-07-10 23:54:41.266 [INFO][4453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" Namespace="calico-system" Pod="calico-kube-controllers-84b8d4f545-qlpwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b8d4f545--qlpwt-eth0" Jul 10 23:54:41.275945 containerd[1514]: time="2025-07-10T23:54:41.275890756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f656494b-n4vd4,Uid:f1c62578-5eea-4dd0-89a8-770515134a8b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1\"" Jul 10 23:54:41.277970 containerd[1514]: time="2025-07-10T23:54:41.277880077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 23:54:41.299478 containerd[1514]: time="2025-07-10T23:54:41.299423124Z" level=info msg="connecting to shim 8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700" address="unix:///run/containerd/s/7a7104db10674bbb2ee4ed3ff7ae4485d85b8c6622c3f604de9e9eb36bd92afe" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:41.326342 systemd[1]: Started cri-containerd-8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700.scope - libcontainer container 8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700. 
Jul 10 23:54:41.343028 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:54:41.363258 containerd[1514]: time="2025-07-10T23:54:41.363212745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b8d4f545-qlpwt,Uid:e05304b8-2313-4330-b227-1400b69e732c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700\"" Jul 10 23:54:41.669234 kubelet[2641]: I0710 23:54:41.669104 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:54:41.971289 systemd-networkd[1433]: cali62e7ede7244: Gained IPv6LL Jul 10 23:54:42.466004 systemd-networkd[1433]: vxlan.calico: Link UP Jul 10 23:54:42.466013 systemd-networkd[1433]: vxlan.calico: Gained carrier Jul 10 23:54:42.484273 systemd-networkd[1433]: calid490783df9e: Gained IPv6LL Jul 10 23:54:43.013202 containerd[1514]: time="2025-07-10T23:54:43.013124579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j77jp,Uid:d23c2080-afda-4b5d-acb1-7f2e54e5e5f2,Namespace:kube-system,Attempt:0,}" Jul 10 23:54:43.013793 containerd[1514]: time="2025-07-10T23:54:43.013698939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zv4h7,Uid:4bda0615-d9a8-4ef2-ac3a-8fad441f4e10,Namespace:calico-system,Attempt:0,}" Jul 10 23:54:43.050214 containerd[1514]: time="2025-07-10T23:54:43.049696469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:43.051218 containerd[1514]: time="2025-07-10T23:54:43.051131670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 10 23:54:43.052862 containerd[1514]: time="2025-07-10T23:54:43.052827910Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:43.056002 containerd[1514]: time="2025-07-10T23:54:43.055531751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:43.056908 containerd[1514]: time="2025-07-10T23:54:43.056879991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.778970554s" Jul 10 23:54:43.056975 containerd[1514]: time="2025-07-10T23:54:43.056912591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 23:54:43.058123 containerd[1514]: time="2025-07-10T23:54:43.058070552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 23:54:43.060301 containerd[1514]: time="2025-07-10T23:54:43.060237112Z" level=info msg="CreateContainer within sandbox \"a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 23:54:43.209003 systemd-networkd[1433]: 
cali0eca79bddeb: Link UP Jul 10 23:54:43.209389 systemd-networkd[1433]: cali0eca79bddeb: Gained carrier Jul 10 23:54:43.213235 containerd[1514]: time="2025-07-10T23:54:43.211949075Z" level=info msg="Container 97bd4f292d09154e2aeeebf76f7bb7f91b47266de0568fb3942835d2543c0761: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:43.223183 containerd[1514]: time="2025-07-10T23:54:43.223118519Z" level=info msg="CreateContainer within sandbox \"a13257fc901d36ad89b996e12336e491b057d7874b5fae2d6d6f3349169e07a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"97bd4f292d09154e2aeeebf76f7bb7f91b47266de0568fb3942835d2543c0761\"" Jul 10 23:54:43.224371 containerd[1514]: time="2025-07-10T23:54:43.224037439Z" level=info msg="StartContainer for \"97bd4f292d09154e2aeeebf76f7bb7f91b47266de0568fb3942835d2543c0761\"" Jul 10 23:54:43.227486 containerd[1514]: time="2025-07-10T23:54:43.227149440Z" level=info msg="connecting to shim 97bd4f292d09154e2aeeebf76f7bb7f91b47266de0568fb3942835d2543c0761" address="unix:///run/containerd/s/dfb37ef91b96f1bc708a3a5b79e04e1b7302a575b1f3545028544ca5dd867631" protocol=ttrpc version=3 Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.086 [INFO][4774] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0 coredns-7c65d6cfc9- kube-system d23c2080-afda-4b5d-acb1-7f2e54e5e5f2 793 0 2025-07-10 23:54:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-j77jp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0eca79bddeb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j77jp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j77jp-" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.087 [INFO][4774] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j77jp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.130 [INFO][4814] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" HandleID="k8s-pod-network.9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Workload="localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.130 [INFO][4814] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" HandleID="k8s-pod-network.9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Workload="localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-j77jp", "timestamp":"2025-07-10 23:54:43.130557132 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.130 [INFO][4814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.130 [INFO][4814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.130 [INFO][4814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.141 [INFO][4814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" host="localhost" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.152 [INFO][4814] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.162 [INFO][4814] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.165 [INFO][4814] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.169 [INFO][4814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.169 [INFO][4814] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" host="localhost" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.172 [INFO][4814] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4 Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.187 [INFO][4814] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" host="localhost" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.200 [INFO][4814] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" host="localhost" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.201 [INFO][4814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" host="localhost" Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.201 [INFO][4814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:54:43.231390 containerd[1514]: 2025-07-10 23:54:43.201 [INFO][4814] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" HandleID="k8s-pod-network.9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Workload="localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0" Jul 10 23:54:43.231859 containerd[1514]: 2025-07-10 23:54:43.204 [INFO][4774] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j77jp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d23c2080-afda-4b5d-acb1-7f2e54e5e5f2", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-j77jp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0eca79bddeb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:43.231859 containerd[1514]: 2025-07-10 23:54:43.204 [INFO][4774] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j77jp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0" Jul 10 23:54:43.231859 containerd[1514]: 2025-07-10 23:54:43.205 [INFO][4774] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0eca79bddeb ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j77jp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0" Jul 10 23:54:43.231859 containerd[1514]: 2025-07-10 23:54:43.209 [INFO][4774] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j77jp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0" Jul 10 23:54:43.231859 
containerd[1514]: 2025-07-10 23:54:43.211 [INFO][4774] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j77jp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d23c2080-afda-4b5d-acb1-7f2e54e5e5f2", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4", Pod:"coredns-7c65d6cfc9-j77jp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0eca79bddeb", MAC:"22:6f:d2:9a:e3:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:43.231859 containerd[1514]: 2025-07-10 23:54:43.226 [INFO][4774] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j77jp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j77jp-eth0" Jul 10 23:54:43.250388 systemd[1]: Started cri-containerd-97bd4f292d09154e2aeeebf76f7bb7f91b47266de0568fb3942835d2543c0761.scope - libcontainer container 97bd4f292d09154e2aeeebf76f7bb7f91b47266de0568fb3942835d2543c0761. Jul 10 23:54:43.261397 containerd[1514]: time="2025-07-10T23:54:43.261343010Z" level=info msg="connecting to shim 9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4" address="unix:///run/containerd/s/74c3712f430cfe9e0f0cdd6ed85f5e07e3801406aeb62fbb6c360792b479cddc" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:43.291400 systemd[1]: Started cri-containerd-9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4.scope - libcontainer container 9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4. 
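Annotation: each endpoint brought up so far is given a generated MAC on its cali* veth (9e:77:72:bc:a5:c2, 7a:b9:14:ae:f9:99, 3a:af:05:1e:19:26, 22:6f:d2:9a:e3:0f). A small Go sketch that parses those values and checks the locally-administered bit, which is set on all four, as expected for synthesized interface addresses:

package main

import (
	"fmt"
	"net"
)

func main() {
	// MAC addresses recorded for the Calico veth endpoints in this log.
	macs := map[string]string{
		"cali62e7ede7244": "9e:77:72:bc:a5:c2",
		"calid490783df9e": "7a:b9:14:ae:f9:99",
		"calif1edf1bdac3": "3a:af:05:1e:19:26",
		"cali0eca79bddeb": "22:6f:d2:9a:e3:0f",
	}
	for iface, m := range macs {
		hw, err := net.ParseMAC(m)
		if err != nil {
			fmt.Println(iface, "invalid MAC:", err)
			continue
		}
		// Bit 0x02 of the first octet marks a locally administered address.
		local := hw[0]&0x02 != 0
		fmt.Printf("%s %s locally-administered=%v\n", iface, hw, local)
	}
}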
Jul 10 23:54:43.308807 systemd-networkd[1433]: calibb291fe941a: Link UP Jul 10 23:54:43.309779 systemd-networkd[1433]: calibb291fe941a: Gained carrier Jul 10 23:54:43.316283 systemd-networkd[1433]: calif1edf1bdac3: Gained IPv6LL Jul 10 23:54:43.323289 containerd[1514]: time="2025-07-10T23:54:43.323236987Z" level=info msg="StartContainer for \"97bd4f292d09154e2aeeebf76f7bb7f91b47266de0568fb3942835d2543c0761\" returns successfully" Jul 10 23:54:43.324554 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.076 [INFO][4781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zv4h7-eth0 csi-node-driver- calico-system 4bda0615-d9a8-4ef2-ac3a-8fad441f4e10 669 0 2025-07-10 23:54:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zv4h7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibb291fe941a [] [] }} ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Namespace="calico-system" Pod="csi-node-driver-zv4h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zv4h7-" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.076 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Namespace="calico-system" Pod="csi-node-driver-zv4h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zv4h7-eth0" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.140 [INFO][4806] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" HandleID="k8s-pod-network.9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Workload="localhost-k8s-csi--node--driver--zv4h7-eth0" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.140 [INFO][4806] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" HandleID="k8s-pod-network.9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Workload="localhost-k8s-csi--node--driver--zv4h7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zv4h7", "timestamp":"2025-07-10 23:54:43.140745495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.140 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.201 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.201 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.246 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" host="localhost" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.253 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.265 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.268 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.276 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.277 [INFO][4806] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" host="localhost" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.279 [INFO][4806] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4 Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.287 [INFO][4806] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" host="localhost" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.298 [INFO][4806] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" host="localhost" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.298 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" host="localhost" Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.298 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:54:43.330698 containerd[1514]: 2025-07-10 23:54:43.298 [INFO][4806] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" HandleID="k8s-pod-network.9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Workload="localhost-k8s-csi--node--driver--zv4h7-eth0" Jul 10 23:54:43.331589 containerd[1514]: 2025-07-10 23:54:43.303 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Namespace="calico-system" Pod="csi-node-driver-zv4h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zv4h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zv4h7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4bda0615-d9a8-4ef2-ac3a-8fad441f4e10", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zv4h7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibb291fe941a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:43.331589 containerd[1514]: 2025-07-10 23:54:43.303 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Namespace="calico-system" Pod="csi-node-driver-zv4h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zv4h7-eth0" Jul 10 23:54:43.331589 containerd[1514]: 2025-07-10 23:54:43.303 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb291fe941a ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Namespace="calico-system" Pod="csi-node-driver-zv4h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zv4h7-eth0" Jul 10 23:54:43.331589 containerd[1514]: 2025-07-10 23:54:43.308 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Namespace="calico-system" Pod="csi-node-driver-zv4h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zv4h7-eth0" Jul 10 23:54:43.331589 containerd[1514]: 2025-07-10 23:54:43.311 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Namespace="calico-system" Pod="csi-node-driver-zv4h7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--zv4h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zv4h7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4bda0615-d9a8-4ef2-ac3a-8fad441f4e10", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4", Pod:"csi-node-driver-zv4h7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibb291fe941a", MAC:"ea:ec:42:e2:20:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:43.331589 containerd[1514]: 2025-07-10 23:54:43.326 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" Namespace="calico-system" Pod="csi-node-driver-zv4h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zv4h7-eth0" Jul 10 23:54:43.361028 containerd[1514]: time="2025-07-10T23:54:43.360974598Z" level=info msg="connecting to shim 9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4" address="unix:///run/containerd/s/1287fb2f0c35abdf22ca28dfb16054ef77908266cca2277f978735c35f5d52cb" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:43.371751 containerd[1514]: time="2025-07-10T23:54:43.371700721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j77jp,Uid:d23c2080-afda-4b5d-acb1-7f2e54e5e5f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4\"" Jul 10 23:54:43.379153 containerd[1514]: time="2025-07-10T23:54:43.378653603Z" level=info msg="CreateContainer within sandbox \"9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:54:43.390042 containerd[1514]: time="2025-07-10T23:54:43.389986406Z" level=info msg="Container 2430099fbd764f20231bdfca9c24538f0d9d3ba79431181f7b1559932b162c49: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:43.395394 systemd[1]: Started cri-containerd-9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4.scope - libcontainer container 9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4. 
Jul 10 23:54:43.398053 containerd[1514]: time="2025-07-10T23:54:43.398007049Z" level=info msg="CreateContainer within sandbox \"9de13dddc7563acc480aeb7b1eba48ab479ad35dc16363bb99d5b3c8c577cdc4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2430099fbd764f20231bdfca9c24538f0d9d3ba79431181f7b1559932b162c49\"" Jul 10 23:54:43.400978 containerd[1514]: time="2025-07-10T23:54:43.400247169Z" level=info msg="StartContainer for \"2430099fbd764f20231bdfca9c24538f0d9d3ba79431181f7b1559932b162c49\"" Jul 10 23:54:43.402667 containerd[1514]: time="2025-07-10T23:54:43.402628130Z" level=info msg="connecting to shim 2430099fbd764f20231bdfca9c24538f0d9d3ba79431181f7b1559932b162c49" address="unix:///run/containerd/s/74c3712f430cfe9e0f0cdd6ed85f5e07e3801406aeb62fbb6c360792b479cddc" protocol=ttrpc version=3 Jul 10 23:54:43.416970 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:54:43.435793 containerd[1514]: time="2025-07-10T23:54:43.435744899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zv4h7,Uid:4bda0615-d9a8-4ef2-ac3a-8fad441f4e10,Namespace:calico-system,Attempt:0,} returns sandbox id \"9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4\"" Jul 10 23:54:43.439499 systemd[1]: Started cri-containerd-2430099fbd764f20231bdfca9c24538f0d9d3ba79431181f7b1559932b162c49.scope - libcontainer container 2430099fbd764f20231bdfca9c24538f0d9d3ba79431181f7b1559932b162c49. Jul 10 23:54:43.470215 containerd[1514]: time="2025-07-10T23:54:43.470155749Z" level=info msg="StartContainer for \"2430099fbd764f20231bdfca9c24538f0d9d3ba79431181f7b1559932b162c49\" returns successfully" Jul 10 23:54:43.955356 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Jul 10 23:54:44.013484 containerd[1514]: time="2025-07-10T23:54:44.013378864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f656494b-4knwr,Uid:37d160c2-6211-496a-8f08-d88ca9af73d3,Namespace:calico-apiserver,Attempt:0,}" Jul 10 23:54:44.014512 containerd[1514]: time="2025-07-10T23:54:44.014449704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d9w88,Uid:3f82ae53-4310-4884-ac43-08c9352cdd68,Namespace:calico-system,Attempt:0,}" Jul 10 23:54:44.187549 systemd-networkd[1433]: cali1f31a72916e: Link UP Jul 10 23:54:44.188879 systemd-networkd[1433]: cali1f31a72916e: Gained carrier Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.077 [INFO][5009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--d9w88-eth0 goldmane-58fd7646b9- calico-system 3f82ae53-4310-4884-ac43-08c9352cdd68 791 0 2025-07-10 23:54:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-d9w88 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1f31a72916e [] [] }} ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9w88" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9w88-" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.077 [INFO][5009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-d9w88" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9w88-eth0" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.114 [INFO][5038] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" HandleID="k8s-pod-network.12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" Workload="localhost-k8s-goldmane--58fd7646b9--d9w88-eth0" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.115 [INFO][5038] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" HandleID="k8s-pod-network.12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" Workload="localhost-k8s-goldmane--58fd7646b9--d9w88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-d9w88", "timestamp":"2025-07-10 23:54:44.114842571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.115 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.115 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.115 [INFO][5038] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.134 [INFO][5038] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" host="localhost" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.138 [INFO][5038] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.143 [INFO][5038] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.145 [INFO][5038] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.149 [INFO][5038] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.149 [INFO][5038] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" host="localhost" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.156 [INFO][5038] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.166 [INFO][5038] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" host="localhost" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.175 [INFO][5038] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" host="localhost" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.175 [INFO][5038] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" host="localhost" Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.175 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 23:54:44.207184 containerd[1514]: 2025-07-10 23:54:44.175 [INFO][5038] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" HandleID="k8s-pod-network.12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" Workload="localhost-k8s-goldmane--58fd7646b9--d9w88-eth0" Jul 10 23:54:44.207747 containerd[1514]: 2025-07-10 23:54:44.178 [INFO][5009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9w88" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9w88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d9w88-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3f82ae53-4310-4884-ac43-08c9352cdd68", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-d9w88", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f31a72916e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:44.207747 containerd[1514]: 2025-07-10 23:54:44.178 [INFO][5009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9w88" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9w88-eth0" Jul 10 23:54:44.207747 containerd[1514]: 2025-07-10 23:54:44.178 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f31a72916e ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9w88" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9w88-eth0" Jul 10 23:54:44.207747 containerd[1514]: 2025-07-10 23:54:44.189 [INFO][5009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-d9w88" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9w88-eth0" Jul 10 23:54:44.207747 containerd[1514]: 2025-07-10 23:54:44.189 [INFO][5009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9w88" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9w88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d9w88-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3f82ae53-4310-4884-ac43-08c9352cdd68", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e", Pod:"goldmane-58fd7646b9-d9w88", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f31a72916e", MAC:"0e:a7:1a:da:7b:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:44.207747 containerd[1514]: 2025-07-10 23:54:44.203 [INFO][5009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9w88" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9w88-eth0" Jul 10 23:54:44.251293 containerd[1514]: time="2025-07-10T23:54:44.251067687Z" level=info msg="connecting to shim 12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e" address="unix:///run/containerd/s/59f48cf6075e63eff0957d43e57ca1404b46e8f933ad2dcd643f4da5ff57a4b4" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:44.255969 kubelet[2641]: I0710 23:54:44.255908 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54f656494b-n4vd4" podStartSLOduration=26.475459853 podStartE2EDuration="28.255889688s" podCreationTimestamp="2025-07-10 23:54:16 +0000 UTC" firstStartedPulling="2025-07-10 23:54:41.277507037 +0000 UTC m=+39.357024327" lastFinishedPulling="2025-07-10 23:54:43.057936832 +0000 UTC m=+41.137454162" observedRunningTime="2025-07-10 23:54:44.255026328 +0000 UTC m=+42.334543658" watchObservedRunningTime="2025-07-10 23:54:44.255889688 +0000 UTC m=+42.335407018" Jul 10 23:54:44.275112 kubelet[2641]: I0710 23:54:44.272566 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j77jp" podStartSLOduration=38.272547453 podStartE2EDuration="38.272547453s" podCreationTimestamp="2025-07-10 23:54:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:54:44.271613253 +0000 UTC m=+42.351130583" watchObservedRunningTime="2025-07-10 23:54:44.272547453 +0000 UTC m=+42.352064783" Jul 10 23:54:44.310948 systemd-networkd[1433]: cali90d7b253f7e: Link UP Jul 10 23:54:44.313446 systemd-networkd[1433]: cali90d7b253f7e: Gained carrier Jul 10 23:54:44.330312 systemd[1]: Started cri-containerd-12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e.scope - libcontainer container 12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e. Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.092 [INFO][5013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0 calico-apiserver-54f656494b- calico-apiserver 37d160c2-6211-496a-8f08-d88ca9af73d3 784 0 2025-07-10 23:54:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54f656494b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54f656494b-4knwr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90d7b253f7e [] [] }} ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-4knwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--4knwr-" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.093 [INFO][5013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-4knwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.137 [INFO][5044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" HandleID="k8s-pod-network.2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Workload="localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.137 [INFO][5044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" HandleID="k8s-pod-network.2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Workload="localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042c360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54f656494b-4knwr", "timestamp":"2025-07-10 23:54:44.137284937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.137 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.175 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.177 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.232 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" host="localhost" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.243 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.253 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.265 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.277 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.277 [INFO][5044] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" host="localhost" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.283 [INFO][5044] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6 Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.291 [INFO][5044] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" host="localhost" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.302 [INFO][5044] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" host="localhost" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.302 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" host="localhost" Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.302 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:54:44.347745 containerd[1514]: 2025-07-10 23:54:44.302 [INFO][5044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" HandleID="k8s-pod-network.2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Workload="localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0" Jul 10 23:54:44.348397 containerd[1514]: 2025-07-10 23:54:44.306 [INFO][5013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-4knwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0", GenerateName:"calico-apiserver-54f656494b-", Namespace:"calico-apiserver", SelfLink:"", UID:"37d160c2-6211-496a-8f08-d88ca9af73d3", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f656494b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54f656494b-4knwr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90d7b253f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:44.348397 containerd[1514]: 2025-07-10 23:54:44.307 [INFO][5013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-4knwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0" Jul 10 23:54:44.348397 containerd[1514]: 2025-07-10 23:54:44.307 [INFO][5013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90d7b253f7e ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-4knwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0" Jul 10 23:54:44.348397 containerd[1514]: 2025-07-10 23:54:44.314 [INFO][5013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-4knwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0" Jul 10 23:54:44.348397 containerd[1514]: 2025-07-10 23:54:44.316 [INFO][5013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-4knwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0", GenerateName:"calico-apiserver-54f656494b-", Namespace:"calico-apiserver", SelfLink:"", UID:"37d160c2-6211-496a-8f08-d88ca9af73d3", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f656494b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6", Pod:"calico-apiserver-54f656494b-4knwr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90d7b253f7e", MAC:"2a:38:6b:88:a8:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:54:44.348397 containerd[1514]: 2025-07-10 23:54:44.330 [INFO][5013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" Namespace="calico-apiserver" Pod="calico-apiserver-54f656494b-4knwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f656494b--4knwr-eth0" Jul 10 23:54:44.352254 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:54:44.385999 containerd[1514]: time="2025-07-10T23:54:44.384895123Z" level=info msg="connecting to shim 2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6" address="unix:///run/containerd/s/1201d871ff649f620a00c3bba9585760591ce98bf584e715cb115b7f07a6c9a1" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:54:44.397767 containerd[1514]: time="2025-07-10T23:54:44.397718526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d9w88,Uid:3f82ae53-4310-4884-ac43-08c9352cdd68,Namespace:calico-system,Attempt:0,} returns sandbox id \"12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e\"" Jul 10 23:54:44.423440 systemd[1]: Started cri-containerd-2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6.scope - libcontainer container 2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6. 
Jul 10 23:54:44.445209 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:54:44.475599 containerd[1514]: time="2025-07-10T23:54:44.474814307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f656494b-4knwr,Uid:37d160c2-6211-496a-8f08-d88ca9af73d3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6\"" Jul 10 23:54:44.480436 containerd[1514]: time="2025-07-10T23:54:44.480292188Z" level=info msg="CreateContainer within sandbox \"2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 23:54:44.495098 containerd[1514]: time="2025-07-10T23:54:44.495053832Z" level=info msg="Container d3a62f31f538ac474868f5a1193d9380a21ce08a4250931cd0b62f69cc6827e5: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:44.504693 containerd[1514]: time="2025-07-10T23:54:44.504612595Z" level=info msg="CreateContainer within sandbox \"2b8bd867322c4196edf04fbaad1c392be0438277bb2c779853ed19daf57913f6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d3a62f31f538ac474868f5a1193d9380a21ce08a4250931cd0b62f69cc6827e5\"" Jul 10 23:54:44.505844 containerd[1514]: time="2025-07-10T23:54:44.505500075Z" level=info msg="StartContainer for \"d3a62f31f538ac474868f5a1193d9380a21ce08a4250931cd0b62f69cc6827e5\"" Jul 10 23:54:44.507116 containerd[1514]: time="2025-07-10T23:54:44.507084756Z" level=info msg="connecting to shim d3a62f31f538ac474868f5a1193d9380a21ce08a4250931cd0b62f69cc6827e5" address="unix:///run/containerd/s/1201d871ff649f620a00c3bba9585760591ce98bf584e715cb115b7f07a6c9a1" protocol=ttrpc version=3 Jul 10 23:54:44.535399 systemd[1]: Started cri-containerd-d3a62f31f538ac474868f5a1193d9380a21ce08a4250931cd0b62f69cc6827e5.scope - libcontainer container d3a62f31f538ac474868f5a1193d9380a21ce08a4250931cd0b62f69cc6827e5. 
Jul 10 23:54:44.595386 systemd-networkd[1433]: cali0eca79bddeb: Gained IPv6LL Jul 10 23:54:44.599803 containerd[1514]: time="2025-07-10T23:54:44.599763340Z" level=info msg="StartContainer for \"d3a62f31f538ac474868f5a1193d9380a21ce08a4250931cd0b62f69cc6827e5\" returns successfully" Jul 10 23:54:45.114093 containerd[1514]: time="2025-07-10T23:54:45.113951716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:45.114828 containerd[1514]: time="2025-07-10T23:54:45.114781756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 10 23:54:45.115980 containerd[1514]: time="2025-07-10T23:54:45.115930516Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:45.118248 containerd[1514]: time="2025-07-10T23:54:45.118221317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:45.119111 containerd[1514]: time="2025-07-10T23:54:45.118990997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.060883445s" Jul 10 23:54:45.119111 containerd[1514]: time="2025-07-10T23:54:45.119022757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 10 23:54:45.120334 containerd[1514]: time="2025-07-10T23:54:45.120299237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 23:54:45.126711 containerd[1514]: time="2025-07-10T23:54:45.126669239Z" level=info msg="CreateContainer within sandbox \"8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 23:54:45.135083 containerd[1514]: time="2025-07-10T23:54:45.134261001Z" level=info msg="Container a4ada7b58b60d5eec3423fc7f9e6842871bfce43d1d0f99fb57ba427525d97ca: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:45.142010 containerd[1514]: time="2025-07-10T23:54:45.141965203Z" level=info msg="CreateContainer within sandbox \"8cdec09779d0ff25c1ed45a626193fb347ad4b4070456579811aea373ca4c700\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a4ada7b58b60d5eec3423fc7f9e6842871bfce43d1d0f99fb57ba427525d97ca\"" Jul 10 23:54:45.144186 containerd[1514]: time="2025-07-10T23:54:45.144122723Z" level=info msg="StartContainer for \"a4ada7b58b60d5eec3423fc7f9e6842871bfce43d1d0f99fb57ba427525d97ca\"" Jul 10 23:54:45.146029 containerd[1514]: time="2025-07-10T23:54:45.145953524Z" level=info msg="connecting to shim a4ada7b58b60d5eec3423fc7f9e6842871bfce43d1d0f99fb57ba427525d97ca" address="unix:///run/containerd/s/7a7104db10674bbb2ee4ed3ff7ae4485d85b8c6622c3f604de9e9eb36bd92afe" protocol=ttrpc version=3 Jul 10 23:54:45.166314 systemd[1]: Started 
cri-containerd-a4ada7b58b60d5eec3423fc7f9e6842871bfce43d1d0f99fb57ba427525d97ca.scope - libcontainer container a4ada7b58b60d5eec3423fc7f9e6842871bfce43d1d0f99fb57ba427525d97ca. Jul 10 23:54:45.202034 containerd[1514]: time="2025-07-10T23:54:45.201998458Z" level=info msg="StartContainer for \"a4ada7b58b60d5eec3423fc7f9e6842871bfce43d1d0f99fb57ba427525d97ca\" returns successfully" Jul 10 23:54:45.237262 systemd-networkd[1433]: calibb291fe941a: Gained IPv6LL Jul 10 23:54:45.253225 kubelet[2641]: I0710 23:54:45.253190 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:54:45.276416 kubelet[2641]: I0710 23:54:45.276335 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54f656494b-4knwr" podStartSLOduration=29.276301317 podStartE2EDuration="29.276301317s" podCreationTimestamp="2025-07-10 23:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:54:45.275223916 +0000 UTC m=+43.354741246" watchObservedRunningTime="2025-07-10 23:54:45.276301317 +0000 UTC m=+43.355818647" Jul 10 23:54:45.277309 kubelet[2641]: I0710 23:54:45.276952 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84b8d4f545-qlpwt" podStartSLOduration=21.521536585 podStartE2EDuration="25.276940317s" podCreationTimestamp="2025-07-10 23:54:20 +0000 UTC" firstStartedPulling="2025-07-10 23:54:41.364540545 +0000 UTC m=+39.444057875" lastFinishedPulling="2025-07-10 23:54:45.119944277 +0000 UTC m=+43.199461607" observedRunningTime="2025-07-10 23:54:45.263583153 +0000 UTC m=+43.343100523" watchObservedRunningTime="2025-07-10 23:54:45.276940317 +0000 UTC m=+43.356457607" Jul 10 23:54:45.392796 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:53612.service - OpenSSH per-connection server daemon (10.0.0.1:53612). Jul 10 23:54:45.466944 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 53612 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:54:45.469626 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:54:45.474786 systemd-logind[1488]: New session 9 of user core. Jul 10 23:54:45.486404 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 23:54:45.492264 systemd-networkd[1433]: cali1f31a72916e: Gained IPv6LL Jul 10 23:54:45.767110 sshd[5264]: Connection closed by 10.0.0.1 port 53612 Jul 10 23:54:45.768353 sshd-session[5262]: pam_unix(sshd:session): session closed for user core Jul 10 23:54:45.775136 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit. Jul 10 23:54:45.775257 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:53612.service: Deactivated successfully. Jul 10 23:54:45.779896 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 23:54:45.786285 systemd-logind[1488]: Removed session 9. 
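The kubelet pod_startup_latency_tracker entries above carry both an end-to-end startup duration and the image-pull window (firstStartedPulling / lastFinishedPulling), with the Go zero time 0001-01-01 00:00:00 +0000 UTC standing in when no pull was needed. The sketch below summarises such entries from journal text on stdin; subtracting the monotonic m=+<seconds> offsets is an assumption that is only meaningful within a single kubelet run, and the regexes mirror the log text above rather than any documented kubelet format.

```python
#!/usr/bin/env python3
"""Sketch: summarise kubelet "Observed pod startup duration" entries like the
ones above. Field names are copied from the log text; subtracting the monotonic
m=+<seconds> offsets is an assumption that only holds within one kubelet run."""
import re
import sys

ENTRY_RE = re.compile(
    r'Observed pod startup duration"\s+pod="(?P<pod>[^"]+)"'
    r'.*?podStartE2EDuration="(?P<e2e>[^"]+)"'
    r'.*?firstStartedPulling="(?P<first>[^"]+)"'
    r'.*?lastFinishedPulling="(?P<last>[^"]+)"',
    re.DOTALL)  # tolerate entries wrapped across lines
MONO_RE = re.compile(r'm=\+([0-9.]+)')
GO_ZERO = "0001-01-01 00:00:00 +0000 UTC"  # kubelet logs this when nothing was pulled

def pull_seconds(first, last):
    """Image-pull wall time in seconds, or None when the images were already present."""
    if first.startswith(GO_ZERO) or last.startswith(GO_ZERO):
        return None
    f, l = MONO_RE.search(first), MONO_RE.search(last)
    return float(l.group(1)) - float(f.group(1)) if f and l else None

if __name__ == "__main__":
    for m in ENTRY_RE.finditer(sys.stdin.read()):
        pull = pull_seconds(m.group("first"), m.group("last"))
        note = "no pull (images cached)" if pull is None else f"pull {pull:.3f}s"
        print(f'{m.group("pod")}: e2e {m.group("e2e")}, {note}')
```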
Jul 10 23:54:46.119082 containerd[1514]: time="2025-07-10T23:54:46.118928526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:46.119945 containerd[1514]: time="2025-07-10T23:54:46.119890126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 10 23:54:46.120606 containerd[1514]: time="2025-07-10T23:54:46.120564486Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:46.122499 containerd[1514]: time="2025-07-10T23:54:46.122449007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:46.123081 containerd[1514]: time="2025-07-10T23:54:46.123049807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.00270777s" Jul 10 23:54:46.123140 containerd[1514]: time="2025-07-10T23:54:46.123082727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 10 23:54:46.124259 containerd[1514]: time="2025-07-10T23:54:46.124182367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 23:54:46.129060 containerd[1514]: time="2025-07-10T23:54:46.129020808Z" level=info msg="CreateContainer within sandbox \"9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 23:54:46.141232 containerd[1514]: time="2025-07-10T23:54:46.139755411Z" level=info msg="Container a796754ba4b57949b9cf721d0e361a077c9afe09a1cbee2ce95e9cd2ad2a9139: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:46.171694 containerd[1514]: time="2025-07-10T23:54:46.171631738Z" level=info msg="CreateContainer within sandbox \"9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a796754ba4b57949b9cf721d0e361a077c9afe09a1cbee2ce95e9cd2ad2a9139\"" Jul 10 23:54:46.173698 containerd[1514]: time="2025-07-10T23:54:46.173649259Z" level=info msg="StartContainer for \"a796754ba4b57949b9cf721d0e361a077c9afe09a1cbee2ce95e9cd2ad2a9139\"" Jul 10 23:54:46.176313 containerd[1514]: time="2025-07-10T23:54:46.176265499Z" level=info msg="connecting to shim a796754ba4b57949b9cf721d0e361a077c9afe09a1cbee2ce95e9cd2ad2a9139" address="unix:///run/containerd/s/1287fb2f0c35abdf22ca28dfb16054ef77908266cca2277f978735c35f5d52cb" protocol=ttrpc version=3 Jul 10 23:54:46.199257 systemd-networkd[1433]: cali90d7b253f7e: Gained IPv6LL Jul 10 23:54:46.241408 systemd[1]: Started cri-containerd-a796754ba4b57949b9cf721d0e361a077c9afe09a1cbee2ce95e9cd2ad2a9139.scope - libcontainer container a796754ba4b57949b9cf721d0e361a077c9afe09a1cbee2ce95e9cd2ad2a9139. 
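Between the CNI setup steps, containerd records each image pull with its digest, byte size, and wall-clock duration (for example the calico/csi pull above: 9594943 bytes in 1.00270777s). The sketch below tallies those entries from journal text on stdin; it only understands the plain "<seconds>s" duration form seen in this log, and the handling of backslash-escaped quotes inside the msg="..." wrapper is an assumption about how the raw journal text appears.

```python
#!/usr/bin/env python3
"""Sketch: tally containerd "Pulled image ... in <duration>" entries like the
ones above. Handles only the plain "<seconds>s" duration form seen in this
journal; the optional backslash before quotes accounts for the escaping used
inside the msg="..." wrapper in raw journal text (an assumption)."""
import re
import sys

PULL_RE = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?" with image id \\?"(?P<id>sha256:[0-9a-f]+)\\?"'
    r'.*? size \\?"(?P<size>\d+)\\?" in (?P<secs>[0-9.]+)s')

if __name__ == "__main__":
    total = 0.0
    for m in PULL_RE.finditer(sys.stdin.read()):
        secs = float(m.group("secs"))
        total += secs
        mib = int(m.group("size")) / (1024 * 1024)
        print(f'{m.group("image")}: {mib:.1f} MiB in {secs:.2f}s')
    print(f"total pull time: {total:.2f}s")
```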
Jul 10 23:54:46.257184 kubelet[2641]: I0710 23:54:46.257142 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:54:46.257184 kubelet[2641]: I0710 23:54:46.257143 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:54:46.306765 containerd[1514]: time="2025-07-10T23:54:46.306725570Z" level=info msg="StartContainer for \"a796754ba4b57949b9cf721d0e361a077c9afe09a1cbee2ce95e9cd2ad2a9139\" returns successfully" Jul 10 23:54:47.682465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238217932.mount: Deactivated successfully. Jul 10 23:54:48.024033 containerd[1514]: time="2025-07-10T23:54:48.023990118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:48.024764 containerd[1514]: time="2025-07-10T23:54:48.024731678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 10 23:54:48.026189 containerd[1514]: time="2025-07-10T23:54:48.026123038Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:48.028507 containerd[1514]: time="2025-07-10T23:54:48.028476839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:48.029145 containerd[1514]: time="2025-07-10T23:54:48.029110199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.904900072s" Jul 10 23:54:48.029217 containerd[1514]: time="2025-07-10T23:54:48.029146519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 10 23:54:48.031090 containerd[1514]: time="2025-07-10T23:54:48.031007519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 23:54:48.032389 containerd[1514]: time="2025-07-10T23:54:48.032361280Z" level=info msg="CreateContainer within sandbox \"12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 23:54:48.040724 containerd[1514]: time="2025-07-10T23:54:48.040676761Z" level=info msg="Container c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:48.049392 containerd[1514]: time="2025-07-10T23:54:48.049346843Z" level=info msg="CreateContainer within sandbox \"12e585d7f1c26a2decbe0c6acae0f341c6f25f818a2ad6f714b4dbbf78e6097e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1\"" Jul 10 23:54:48.050866 containerd[1514]: time="2025-07-10T23:54:48.050840043Z" level=info msg="StartContainer for \"c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1\"" Jul 10 23:54:48.053316 containerd[1514]: time="2025-07-10T23:54:48.053273324Z" level=info msg="connecting to shim 
c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1" address="unix:///run/containerd/s/59f48cf6075e63eff0957d43e57ca1404b46e8f933ad2dcd643f4da5ff57a4b4" protocol=ttrpc version=3 Jul 10 23:54:48.077738 systemd[1]: Started cri-containerd-c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1.scope - libcontainer container c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1. Jul 10 23:54:48.130981 containerd[1514]: time="2025-07-10T23:54:48.130943380Z" level=info msg="StartContainer for \"c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1\" returns successfully" Jul 10 23:54:48.289579 kubelet[2641]: I0710 23:54:48.288905 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-d9w88" podStartSLOduration=24.65762914 podStartE2EDuration="28.288887732s" podCreationTimestamp="2025-07-10 23:54:20 +0000 UTC" firstStartedPulling="2025-07-10 23:54:44.399425567 +0000 UTC m=+42.478942897" lastFinishedPulling="2025-07-10 23:54:48.030684159 +0000 UTC m=+46.110201489" observedRunningTime="2025-07-10 23:54:48.287671332 +0000 UTC m=+46.367188662" watchObservedRunningTime="2025-07-10 23:54:48.288887732 +0000 UTC m=+46.368405062" Jul 10 23:54:48.426668 containerd[1514]: time="2025-07-10T23:54:48.426630681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1\" id:\"f8c8dd85d1f618c49f4a6eb6febe43e4461df938862b6e9efb721c3eb648e530\" pid:5386 exit_status:1 exited_at:{seconds:1752191688 nanos:426284081}" Jul 10 23:54:49.179059 containerd[1514]: time="2025-07-10T23:54:49.178995194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:49.180148 containerd[1514]: time="2025-07-10T23:54:49.180095874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 10 23:54:49.181297 containerd[1514]: time="2025-07-10T23:54:49.181245034Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:49.183305 containerd[1514]: time="2025-07-10T23:54:49.183136355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:54:49.184086 containerd[1514]: time="2025-07-10T23:54:49.184043875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.153005876s" Jul 10 23:54:49.184086 containerd[1514]: time="2025-07-10T23:54:49.184080435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 10 23:54:49.187193 containerd[1514]: time="2025-07-10T23:54:49.187150795Z" level=info msg="CreateContainer within sandbox \"9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 23:54:49.194188 containerd[1514]: time="2025-07-10T23:54:49.194134997Z" level=info msg="Container 134e4454a0cdb8e64c70179c8b8a4f28f7e0c2c9d0d121c808711cd44921e1ac: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:54:49.203904 containerd[1514]: time="2025-07-10T23:54:49.203846359Z" level=info msg="CreateContainer within sandbox \"9232dca3e075619a5f2cc4cd5918d7a7833736eae96dd43dab3bd97d3ebb3db4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"134e4454a0cdb8e64c70179c8b8a4f28f7e0c2c9d0d121c808711cd44921e1ac\"" Jul 10 23:54:49.204544 containerd[1514]: time="2025-07-10T23:54:49.204517519Z" level=info msg="StartContainer for \"134e4454a0cdb8e64c70179c8b8a4f28f7e0c2c9d0d121c808711cd44921e1ac\"" Jul 10 23:54:49.206400 containerd[1514]: time="2025-07-10T23:54:49.206359359Z" level=info msg="connecting to shim 134e4454a0cdb8e64c70179c8b8a4f28f7e0c2c9d0d121c808711cd44921e1ac" address="unix:///run/containerd/s/1287fb2f0c35abdf22ca28dfb16054ef77908266cca2277f978735c35f5d52cb" protocol=ttrpc version=3 Jul 10 23:54:49.231421 systemd[1]: Started cri-containerd-134e4454a0cdb8e64c70179c8b8a4f28f7e0c2c9d0d121c808711cd44921e1ac.scope - libcontainer container 134e4454a0cdb8e64c70179c8b8a4f28f7e0c2c9d0d121c808711cd44921e1ac. Jul 10 23:54:49.272077 containerd[1514]: time="2025-07-10T23:54:49.272004612Z" level=info msg="StartContainer for \"134e4454a0cdb8e64c70179c8b8a4f28f7e0c2c9d0d121c808711cd44921e1ac\" returns successfully" Jul 10 23:54:49.294002 kubelet[2641]: I0710 23:54:49.293937 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zv4h7" podStartSLOduration=23.546096401 podStartE2EDuration="29.293917096s" podCreationTimestamp="2025-07-10 23:54:20 +0000 UTC" firstStartedPulling="2025-07-10 23:54:43.43695134 +0000 UTC m=+41.516468670" lastFinishedPulling="2025-07-10 23:54:49.184772035 +0000 UTC m=+47.264289365" observedRunningTime="2025-07-10 23:54:49.292516256 +0000 UTC m=+47.372033626" watchObservedRunningTime="2025-07-10 23:54:49.293917096 +0000 UTC m=+47.373434426" Jul 10 23:54:49.350497 containerd[1514]: time="2025-07-10T23:54:49.350457827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1\" id:\"548587f70662a8984d1aec83c799e05250b4986964bf2e1aef5162e1a2e2b735\" pid:5443 exit_status:1 exited_at:{seconds:1752191689 nanos:350143627}" Jul 10 23:54:50.105392 kubelet[2641]: I0710 23:54:50.105293 2641 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 23:54:50.111606 kubelet[2641]: I0710 23:54:50.111568 2641 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 23:54:50.779523 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:53628.service - OpenSSH per-connection server daemon (10.0.0.1:53628). Jul 10 23:54:50.849460 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 53628 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:54:50.851156 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:54:50.855028 systemd-logind[1488]: New session 10 of user core. Jul 10 23:54:50.868334 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 10 23:54:51.100672 sshd[5462]: Connection closed by 10.0.0.1 port 53628
Jul 10 23:54:51.101715 sshd-session[5459]: pam_unix(sshd:session): session closed for user core
Jul 10 23:54:51.110820 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:53628.service: Deactivated successfully.
Jul 10 23:54:51.114552 systemd[1]: session-10.scope: Deactivated successfully.
Jul 10 23:54:51.116906 systemd-logind[1488]: Session 10 logged out. Waiting for processes to exit.
Jul 10 23:54:51.119459 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:53632.service - OpenSSH per-connection server daemon (10.0.0.1:53632).
Jul 10 23:54:51.121248 systemd-logind[1488]: Removed session 10.
Jul 10 23:54:51.179414 sshd[5476]: Accepted publickey for core from 10.0.0.1 port 53632 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:54:51.180560 sshd-session[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:54:51.184698 systemd-logind[1488]: New session 11 of user core.
Jul 10 23:54:51.201371 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 10 23:54:51.416642 sshd[5478]: Connection closed by 10.0.0.1 port 53632
Jul 10 23:54:51.418193 sshd-session[5476]: pam_unix(sshd:session): session closed for user core
Jul 10 23:54:51.425910 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:53632.service: Deactivated successfully.
Jul 10 23:54:51.427931 systemd[1]: session-11.scope: Deactivated successfully.
Jul 10 23:54:51.429896 systemd-logind[1488]: Session 11 logged out. Waiting for processes to exit.
Jul 10 23:54:51.434902 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:53642.service - OpenSSH per-connection server daemon (10.0.0.1:53642).
Jul 10 23:54:51.438232 systemd-logind[1488]: Removed session 11.
Jul 10 23:54:51.482114 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 53642 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:54:51.483361 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:54:51.487362 systemd-logind[1488]: New session 12 of user core.
Jul 10 23:54:51.494345 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 10 23:54:51.653263 sshd[5496]: Connection closed by 10.0.0.1 port 53642
Jul 10 23:54:51.653592 sshd-session[5493]: pam_unix(sshd:session): session closed for user core
Jul 10 23:54:51.657039 systemd-logind[1488]: Session 12 logged out. Waiting for processes to exit.
Jul 10 23:54:51.657651 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:53642.service: Deactivated successfully.
Jul 10 23:54:51.659614 systemd[1]: session-12.scope: Deactivated successfully.
Jul 10 23:54:51.661848 systemd-logind[1488]: Removed session 12.
Jul 10 23:54:56.672353 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:57426.service - OpenSSH per-connection server daemon (10.0.0.1:57426).
Jul 10 23:54:56.725296 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 57426 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:54:56.726863 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:54:56.733317 systemd-logind[1488]: New session 13 of user core.
Jul 10 23:54:56.748373 systemd[1]: Started session-13.scope - Session 13 of User core.
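Each SSH connection above shows up as its own sshd@N-<local>:22-<peer>:<port>.service unit plus a session-N.scope, which is the naming systemd produces for per-connection socket activation (Accept=yes) combined with PAM/systemd-logind session tracking. A minimal sketch of that unit pattern, as an assumed conventional setup rather than a quote of the units this host actually ships:

    # sshd.socket - accept connections and spawn one sshd@.service instance per connection
    [Unit]
    Description=OpenSSH per-connection socket

    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

    # sshd@.service - template instantiated per connection; the instance name carries
    # the local and remote endpoints seen in the unit names above
    [Unit]
    Description=OpenSSH per-connection server daemon

    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket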
Jul 10 23:54:56.884454 sshd[5525]: Connection closed by 10.0.0.1 port 57426
Jul 10 23:54:56.884826 sshd-session[5523]: pam_unix(sshd:session): session closed for user core
Jul 10 23:54:56.897616 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:57426.service: Deactivated successfully.
Jul 10 23:54:56.900762 systemd[1]: session-13.scope: Deactivated successfully.
Jul 10 23:54:56.901680 systemd-logind[1488]: Session 13 logged out. Waiting for processes to exit.
Jul 10 23:54:56.903850 systemd-logind[1488]: Removed session 13.
Jul 10 23:54:56.905825 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:57442.service - OpenSSH per-connection server daemon (10.0.0.1:57442).
Jul 10 23:54:56.942195 kubelet[2641]: I0710 23:54:56.941984 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 23:54:56.970860 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 57442 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:54:56.972210 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:54:56.978375 systemd-logind[1488]: New session 14 of user core.
Jul 10 23:54:56.983359 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 10 23:54:56.996269 containerd[1514]: time="2025-07-10T23:54:56.996226047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4ada7b58b60d5eec3423fc7f9e6842871bfce43d1d0f99fb57ba427525d97ca\" id:\"fc411f22992594955826efb1d8fd641e7b7132b7f44cb175594927bcf728a9f9\" pid:5553 exited_at:{seconds:1752191696 nanos:984327406}"
Jul 10 23:54:57.033830 containerd[1514]: time="2025-07-10T23:54:57.033625372Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4ada7b58b60d5eec3423fc7f9e6842871bfce43d1d0f99fb57ba427525d97ca\" id:\"bcc72b2bdfedbecb4c40d8f07c125fcbb6247719357d48c5a6110e12fb3ff263\" pid:5576 exited_at:{seconds:1752191697 nanos:33047052}"
Jul 10 23:54:57.205269 sshd[5563]: Connection closed by 10.0.0.1 port 57442
Jul 10 23:54:57.205842 sshd-session[5538]: pam_unix(sshd:session): session closed for user core
Jul 10 23:54:57.216561 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:57442.service: Deactivated successfully.
Jul 10 23:54:57.218361 systemd[1]: session-14.scope: Deactivated successfully.
Jul 10 23:54:57.221144 systemd-logind[1488]: Session 14 logged out. Waiting for processes to exit.
Jul 10 23:54:57.222866 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:57448.service - OpenSSH per-connection server daemon (10.0.0.1:57448).
Jul 10 23:54:57.223770 systemd-logind[1488]: Removed session 14.
Jul 10 23:54:57.281675 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 57448 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:54:57.283097 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:54:57.287767 systemd-logind[1488]: New session 15 of user core.
Jul 10 23:54:57.296364 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 10 23:54:58.999758 sshd[5600]: Connection closed by 10.0.0.1 port 57448
Jul 10 23:54:59.000085 sshd-session[5598]: pam_unix(sshd:session): session closed for user core
Jul 10 23:54:59.010935 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:57448.service: Deactivated successfully.
Jul 10 23:54:59.012768 systemd[1]: session-15.scope: Deactivated successfully.
Jul 10 23:54:59.013145 systemd[1]: session-15.scope: Consumed 532ms CPU time, 72.4M memory peak.
Jul 10 23:54:59.014032 systemd-logind[1488]: Session 15 logged out. Waiting for processes to exit.
Jul 10 23:54:59.018339 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:57464.service - OpenSSH per-connection server daemon (10.0.0.1:57464).
Jul 10 23:54:59.022234 systemd-logind[1488]: Removed session 15.
Jul 10 23:54:59.095760 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 57464 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:54:59.097289 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:54:59.102398 systemd-logind[1488]: New session 16 of user core.
Jul 10 23:54:59.115411 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 10 23:54:59.388989 sshd[5626]: Connection closed by 10.0.0.1 port 57464
Jul 10 23:54:59.389012 sshd-session[5622]: pam_unix(sshd:session): session closed for user core
Jul 10 23:54:59.401127 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:57464.service: Deactivated successfully.
Jul 10 23:54:59.405461 systemd[1]: session-16.scope: Deactivated successfully.
Jul 10 23:54:59.408252 systemd-logind[1488]: Session 16 logged out. Waiting for processes to exit.
Jul 10 23:54:59.411867 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:57466.service - OpenSSH per-connection server daemon (10.0.0.1:57466).
Jul 10 23:54:59.412516 systemd-logind[1488]: Removed session 16.
Jul 10 23:54:59.472753 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 57466 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:54:59.474252 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:54:59.480409 systemd-logind[1488]: New session 17 of user core.
Jul 10 23:54:59.489410 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 10 23:54:59.573077 containerd[1514]: time="2025-07-10T23:54:59.573032210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f0431cfa5cda0693b7bcb7f2178d682a2df85f4f28f74493889ef30e271779\" id:\"4b775e729c685c7c919fcf99edef453c029dff19ca4f25a36d99e91d4dd041ee\" pid:5653 exited_at:{seconds:1752191699 nanos:572434130}"
Jul 10 23:54:59.676797 sshd[5640]: Connection closed by 10.0.0.1 port 57466
Jul 10 23:54:59.677521 sshd-session[5638]: pam_unix(sshd:session): session closed for user core
Jul 10 23:54:59.681867 systemd-logind[1488]: Session 17 logged out. Waiting for processes to exit.
Jul 10 23:54:59.682110 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:57466.service: Deactivated successfully.
Jul 10 23:54:59.683846 systemd[1]: session-17.scope: Deactivated successfully.
Jul 10 23:54:59.685784 systemd-logind[1488]: Removed session 17.
Jul 10 23:55:04.688245 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:56908.service - OpenSSH per-connection server daemon (10.0.0.1:56908).
Jul 10 23:55:04.751274 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 56908 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:55:04.752661 sshd-session[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:55:04.757223 systemd-logind[1488]: New session 18 of user core.
Jul 10 23:55:04.768345 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 10 23:55:04.903389 sshd[5689]: Connection closed by 10.0.0.1 port 56908
Jul 10 23:55:04.903591 sshd-session[5687]: pam_unix(sshd:session): session closed for user core
Jul 10 23:55:04.907827 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:56908.service: Deactivated successfully.
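The recurring "TaskExit event in podsandbox handler" lines are containerd's CRI plugin reacting to task-exit events; the changing pids under stable container_ids suggest short-lived exec processes (such as probes) inside long-running containers rather than container crashes. A small Go sketch of watching the same event stream with the containerd Go client; the socket path, the k8s.io namespace, and the filter string are assumptions about this host, and the import paths are for the containerd 1.x client (they differ for 2.x):

    package main

    import (
    	"context"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Talk to the node's containerd over its conventional socket (assumed path).
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed (kubelet-created) containers live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Subscribe to task exit events only; drop the filter to see every event.
    	ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
    	for {
    		select {
    		case env := <-ch:
    			// env.Event carries the typed payload (a TaskExit message for this
    			// topic) with the container_id, pid, exit_status and exited_at
    			// fields printed in the log lines above.
    			log.Printf("topic=%s namespace=%s", env.Topic, env.Namespace)
    		case err := <-errs:
    			log.Fatal(err)
    		}
    	}
    }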
Jul 10 23:55:04.909550 systemd[1]: session-18.scope: Deactivated successfully.
Jul 10 23:55:04.911205 systemd-logind[1488]: Session 18 logged out. Waiting for processes to exit.
Jul 10 23:55:04.913096 systemd-logind[1488]: Removed session 18.
Jul 10 23:55:06.453609 kubelet[2641]: I0710 23:55:06.453557 2641 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 23:55:09.915866 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:56922.service - OpenSSH per-connection server daemon (10.0.0.1:56922).
Jul 10 23:55:09.983445 sshd[5710]: Accepted publickey for core from 10.0.0.1 port 56922 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:55:09.984910 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:55:09.990936 systemd-logind[1488]: New session 19 of user core.
Jul 10 23:55:09.999443 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 10 23:55:10.176297 sshd[5712]: Connection closed by 10.0.0.1 port 56922
Jul 10 23:55:10.176110 sshd-session[5710]: pam_unix(sshd:session): session closed for user core
Jul 10 23:55:10.181004 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:56922.service: Deactivated successfully.
Jul 10 23:55:10.184016 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 23:55:10.184954 systemd-logind[1488]: Session 19 logged out. Waiting for processes to exit.
Jul 10 23:55:10.188567 systemd-logind[1488]: Removed session 19.
Jul 10 23:55:12.594542 containerd[1514]: time="2025-07-10T23:55:12.594499665Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1\" id:\"7f42d6e40ef5daa277fa5d16e04f3c364b64d05854b5214cd83612103da70985\" pid:5736 exited_at:{seconds:1752191712 nanos:594064504}"
Jul 10 23:55:12.939914 containerd[1514]: time="2025-07-10T23:55:12.939694118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c53eb444e99fd6de51f68936e9c1c320c1776ba58ba4467c1f19417aed34d5e1\" id:\"e749497995a8e0f98707f297e27dbdfa319799504c405b13c1a99bb171113529\" pid:5760 exited_at:{seconds:1752191712 nanos:939305116}"
Jul 10 23:55:15.191982 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:41074.service - OpenSSH per-connection server daemon (10.0.0.1:41074).
Jul 10 23:55:15.264127 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 41074 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:55:15.265720 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:55:15.273453 systemd-logind[1488]: New session 20 of user core.
Jul 10 23:55:15.284339 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 23:55:15.468215 sshd[5776]: Connection closed by 10.0.0.1 port 41074
Jul 10 23:55:15.467591 sshd-session[5774]: pam_unix(sshd:session): session closed for user core
Jul 10 23:55:15.476089 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:41074.service: Deactivated successfully.
Jul 10 23:55:15.479247 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 23:55:15.482776 systemd-logind[1488]: Session 20 logged out. Waiting for processes to exit.
Jul 10 23:55:15.485629 systemd-logind[1488]: Removed session 20.
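The cri-containerd-<id>.scope units and the PullImage/CreateContainer/StartContainer messages in this section come from kubelet driving containerd's CRI plugin over a local gRPC socket. A minimal sketch of querying that same endpoint directly, roughly the way crictl does; the socket path is the conventional default and an assumption about this host:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// The CRI endpoint is plain gRPC over a unix socket, without TLS.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		// The IDs here are the same 64-hex-character IDs that appear in the
    		// cri-containerd-<id>.scope unit names above.
    		fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
    	}
    }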