Jul 10 23:33:29.819695 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 23:33:29.819715 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Jul 10 22:17:59 -00 2025
Jul 10 23:33:29.819724 kernel: KASLR enabled
Jul 10 23:33:29.819730 kernel: efi: EFI v2.7 by EDK II
Jul 10 23:33:29.819735 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 10 23:33:29.819740 kernel: random: crng init done
Jul 10 23:33:29.819747 kernel: secureboot: Secure boot disabled
Jul 10 23:33:29.819752 kernel: ACPI: Early table checksum verification disabled
Jul 10 23:33:29.819758 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 10 23:33:29.819765 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 23:33:29.819771 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:33:29.819776 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:33:29.819782 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:33:29.819787 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:33:29.819794 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:33:29.819801 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:33:29.819808 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:33:29.819813 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:33:29.819819 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:33:29.819825 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 10 23:33:29.819831 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 10 23:33:29.819837 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 23:33:29.819843 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
Jul 10 23:33:29.819849 kernel: Zone ranges:
Jul 10 23:33:29.819855 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 23:33:29.819862 kernel: DMA32 empty
Jul 10 23:33:29.819868 kernel: Normal empty
Jul 10 23:33:29.819874 kernel: Device empty
Jul 10 23:33:29.819879 kernel: Movable zone start for each node
Jul 10 23:33:29.819885 kernel: Early memory node ranges
Jul 10 23:33:29.819891 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 10 23:33:29.819897 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 10 23:33:29.819903 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 10 23:33:29.819909 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 10 23:33:29.819915 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 10 23:33:29.819921 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 10 23:33:29.819926 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 10 23:33:29.819933 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 10 23:33:29.819939 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 10 23:33:29.819945 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 10 23:33:29.819954 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 10 23:33:29.819960 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 10 23:33:29.819967 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 10 23:33:29.819974 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 23:33:29.819980 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 10 23:33:29.819987 kernel: psci: probing for conduit method from ACPI.
Jul 10 23:33:29.819993 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 23:33:29.819999 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 23:33:29.820005 kernel: psci: Trusted OS migration not required
Jul 10 23:33:29.820012 kernel: psci: SMC Calling Convention v1.1
Jul 10 23:33:29.820018 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 10 23:33:29.820025 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 10 23:33:29.820031 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 10 23:33:29.820038 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 10 23:33:29.820045 kernel: Detected PIPT I-cache on CPU0
Jul 10 23:33:29.820051 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 23:33:29.820057 kernel: CPU features: detected: Spectre-v4
Jul 10 23:33:29.820063 kernel: CPU features: detected: Spectre-BHB
Jul 10 23:33:29.820069 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 23:33:29.820076 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 23:33:29.820082 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 23:33:29.820088 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 23:33:29.820095 kernel: alternatives: applying boot alternatives
Jul 10 23:33:29.820102 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9ae0b1f40710648305be8f7e436b6937e65ac0b33eb84d1b5b7411684b4e7538
Jul 10 23:33:29.820110 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 23:33:29.820116 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 23:33:29.820123 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 23:33:29.820129 kernel: Fallback order for Node 0: 0
Jul 10 23:33:29.820135 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 10 23:33:29.820141 kernel: Policy zone: DMA
Jul 10 23:33:29.820148 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 23:33:29.820154 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 10 23:33:29.820160 kernel: software IO TLB: area num 4.
Jul 10 23:33:29.820166 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 10 23:33:29.820173 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
Jul 10 23:33:29.820179 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 23:33:29.820187 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 23:33:29.820193 kernel: rcu: RCU event tracing is enabled.
Jul 10 23:33:29.820200 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 23:33:29.820206 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 23:33:29.820213 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 23:33:29.820219 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 23:33:29.820225 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 23:33:29.820232 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 23:33:29.820238 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 23:33:29.820245 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 23:33:29.820251 kernel: GICv3: 256 SPIs implemented
Jul 10 23:33:29.820258 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 23:33:29.820264 kernel: Root IRQ handler: gic_handle_irq
Jul 10 23:33:29.820271 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 23:33:29.820277 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 10 23:33:29.820283 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 10 23:33:29.820289 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 10 23:33:29.820296 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 23:33:29.820302 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 10 23:33:29.820309 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 10 23:33:29.820315 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 10 23:33:29.820321 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 23:33:29.820328 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:33:29.820335 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 23:33:29.820342 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 23:33:29.820348 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 23:33:29.820355 kernel: arm-pv: using stolen time PV
Jul 10 23:33:29.820361 kernel: Console: colour dummy device 80x25
Jul 10 23:33:29.820368 kernel: ACPI: Core revision 20240827
Jul 10 23:33:29.820374 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 23:33:29.820381 kernel: pid_max: default: 32768 minimum: 301
Jul 10 23:33:29.820387 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 23:33:29.820395 kernel: landlock: Up and running.
Jul 10 23:33:29.820401 kernel: SELinux: Initializing.
Jul 10 23:33:29.820407 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 23:33:29.820414 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 23:33:29.820421 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 23:33:29.820427 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 23:33:29.820434 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 23:33:29.820440 kernel: Remapping and enabling EFI services.
Jul 10 23:33:29.820446 kernel: smp: Bringing up secondary CPUs ...
Jul 10 23:33:29.820453 kernel: Detected PIPT I-cache on CPU1
Jul 10 23:33:29.820465 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 10 23:33:29.820472 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 10 23:33:29.820480 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:33:29.820487 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 23:33:29.820493 kernel: Detected PIPT I-cache on CPU2
Jul 10 23:33:29.820500 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 10 23:33:29.820507 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 10 23:33:29.820515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:33:29.820522 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 10 23:33:29.820529 kernel: Detected PIPT I-cache on CPU3
Jul 10 23:33:29.820535 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 10 23:33:29.820542 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 10 23:33:29.820549 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:33:29.820555 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 10 23:33:29.820562 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 23:33:29.820569 kernel: SMP: Total of 4 processors activated.
Jul 10 23:33:29.820575 kernel: CPU: All CPU(s) started at EL1
Jul 10 23:33:29.820589 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 23:33:29.820597 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 23:33:29.820604 kernel: CPU features: detected: Common not Private translations
Jul 10 23:33:29.820611 kernel: CPU features: detected: CRC32 instructions
Jul 10 23:33:29.820618 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 10 23:33:29.820625 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 23:33:29.820639 kernel: CPU features: detected: LSE atomic instructions
Jul 10 23:33:29.820647 kernel: CPU features: detected: Privileged Access Never
Jul 10 23:33:29.820654 kernel: CPU features: detected: RAS Extension Support
Jul 10 23:33:29.820663 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 10 23:33:29.820670 kernel: alternatives: applying system-wide alternatives
Jul 10 23:33:29.820676 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 10 23:33:29.820684 kernel: Memory: 2440420K/2572288K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 125920K reserved, 0K cma-reserved)
Jul 10 23:33:29.820690 kernel: devtmpfs: initialized
Jul 10 23:33:29.820697 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 23:33:29.820705 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 23:33:29.820711 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 23:33:29.820718 kernel: 0 pages in range for non-PLT usage
Jul 10 23:33:29.820726 kernel: 508448 pages in range for PLT usage
Jul 10 23:33:29.820733 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 23:33:29.820739 kernel: SMBIOS 3.0.0 present.
Jul 10 23:33:29.820746 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 10 23:33:29.820753 kernel: DMI: Memory slots populated: 1/1
Jul 10 23:33:29.820760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 23:33:29.820766 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 23:33:29.820774 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 23:33:29.820780 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 23:33:29.820788 kernel: audit: initializing netlink subsys (disabled)
Jul 10 23:33:29.820795 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 10 23:33:29.820802 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 23:33:29.820809 kernel: cpuidle: using governor menu
Jul 10 23:33:29.820816 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 23:33:29.820822 kernel: ASID allocator initialised with 32768 entries
Jul 10 23:33:29.820829 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 23:33:29.820836 kernel: Serial: AMBA PL011 UART driver
Jul 10 23:33:29.820843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 23:33:29.820851 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 23:33:29.820858 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 23:33:29.820864 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 23:33:29.820871 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 23:33:29.820878 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 23:33:29.820884 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 23:33:29.820891 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 23:33:29.820898 kernel: ACPI: Added _OSI(Module Device)
Jul 10 23:33:29.820904 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 23:33:29.820912 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 23:33:29.820919 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 23:33:29.820926 kernel: ACPI: Interpreter enabled
Jul 10 23:33:29.820933 kernel: ACPI: Using GIC for interrupt routing
Jul 10 23:33:29.820939 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 23:33:29.820946 kernel: ACPI: CPU0 has been hot-added
Jul 10 23:33:29.820953 kernel: ACPI: CPU1 has been hot-added
Jul 10 23:33:29.820959 kernel: ACPI: CPU2 has been hot-added
Jul 10 23:33:29.820966 kernel: ACPI: CPU3 has been hot-added
Jul 10 23:33:29.820973 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 23:33:29.820981 kernel: printk: legacy console [ttyAMA0] enabled
Jul 10 23:33:29.820988 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 23:33:29.821113 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 23:33:29.821177 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 23:33:29.821236 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 23:33:29.821294 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 10 23:33:29.821351 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 10 23:33:29.821362 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 10 23:33:29.821369 kernel: PCI host bridge to bus 0000:00
Jul 10 23:33:29.821431 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 10 23:33:29.821487 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 23:33:29.821538 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 10 23:33:29.821603 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 23:33:29.821710 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 10 23:33:29.821786 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 10 23:33:29.821847 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 10 23:33:29.821907 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 10 23:33:29.821966 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 23:33:29.822025 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 10 23:33:29.822084 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 10 23:33:29.822145 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 10 23:33:29.822198 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 10 23:33:29.822250 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 23:33:29.822302 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 10 23:33:29.822310 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 23:33:29.822318 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 23:33:29.822325 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 23:33:29.822331 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 23:33:29.822340 kernel: iommu: Default domain type: Translated
Jul 10 23:33:29.822347 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 23:33:29.822354 kernel: efivars: Registered efivars operations
Jul 10 23:33:29.822360 kernel: vgaarb: loaded
Jul 10 23:33:29.822367 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 23:33:29.822374 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 23:33:29.822381 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 23:33:29.822388 kernel: pnp: PnP ACPI init
Jul 10 23:33:29.822453 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 10 23:33:29.822464 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 23:33:29.822471 kernel: NET: Registered PF_INET protocol family
Jul 10 23:33:29.822478 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 23:33:29.822485 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 23:33:29.822492 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 23:33:29.822498 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 23:33:29.822505 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 23:33:29.822512 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 23:33:29.822520 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 23:33:29.822528 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 23:33:29.822534 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 23:33:29.822541 kernel: PCI: CLS 0 bytes, default 64
Jul 10 23:33:29.822548 kernel: kvm [1]: HYP mode not available
Jul 10 23:33:29.822555 kernel: Initialise system trusted keyrings
Jul 10 23:33:29.822561 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 23:33:29.822568 kernel: Key type asymmetric registered
Jul 10 23:33:29.822575 kernel: Asymmetric key parser 'x509' registered
Jul 10 23:33:29.822590 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 10 23:33:29.822598 kernel: io scheduler mq-deadline registered
Jul 10 23:33:29.822604 kernel: io scheduler kyber registered
Jul 10 23:33:29.822611 kernel: io scheduler bfq registered
Jul 10 23:33:29.822618 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 10 23:33:29.822625 kernel: ACPI: button: Power Button [PWRB]
Jul 10 23:33:29.822648 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 10 23:33:29.822723 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 10 23:33:29.822733 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 23:33:29.822743 kernel: thunder_xcv, ver 1.0
Jul 10 23:33:29.822750 kernel: thunder_bgx, ver 1.0
Jul 10 23:33:29.822756 kernel: nicpf, ver 1.0
Jul 10 23:33:29.822763 kernel: nicvf, ver 1.0
Jul 10 23:33:29.822840 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 23:33:29.822898 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T23:33:29 UTC (1752190409)
Jul 10 23:33:29.822907 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 23:33:29.822915 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 10 23:33:29.822925 kernel: watchdog: NMI not fully supported
Jul 10 23:33:29.822932 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 23:33:29.822939 kernel: NET: Registered PF_INET6 protocol family
Jul 10 23:33:29.822946 kernel: Segment Routing with IPv6
Jul 10 23:33:29.822953 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 23:33:29.822960 kernel: NET: Registered PF_PACKET protocol family
Jul 10 23:33:29.822967 kernel: Key type dns_resolver registered
Jul 10 23:33:29.822974 kernel: registered taskstats version 1
Jul 10 23:33:29.822981 kernel: Loading compiled-in X.509 certificates
Jul 10 23:33:29.822988 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 0718d62a7a0702c0da490764fdc6ec06d7382bc1'
Jul 10 23:33:29.822996 kernel: Demotion targets for Node 0: null
Jul 10 23:33:29.823004 kernel: Key type .fscrypt registered
Jul 10 23:33:29.823010 kernel: Key type fscrypt-provisioning registered
Jul 10 23:33:29.823017 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 23:33:29.823024 kernel: ima: Allocated hash algorithm: sha1
Jul 10 23:33:29.823031 kernel: ima: No architecture policies found
Jul 10 23:33:29.823038 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 23:33:29.823045 kernel: clk: Disabling unused clocks
Jul 10 23:33:29.823053 kernel: PM: genpd: Disabling unused power domains
Jul 10 23:33:29.823060 kernel: Warning: unable to open an initial console.
Jul 10 23:33:29.823067 kernel: Freeing unused kernel memory: 39488K
Jul 10 23:33:29.823074 kernel: Run /init as init process
Jul 10 23:33:29.823080 kernel: with arguments:
Jul 10 23:33:29.823087 kernel: /init
Jul 10 23:33:29.823094 kernel: with environment:
Jul 10 23:33:29.823100 kernel: HOME=/
Jul 10 23:33:29.823107 kernel: TERM=linux
Jul 10 23:33:29.823114 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 23:33:29.823122 systemd[1]: Successfully made /usr/ read-only.
Jul 10 23:33:29.823132 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 23:33:29.823140 systemd[1]: Detected virtualization kvm.
Jul 10 23:33:29.823147 systemd[1]: Detected architecture arm64.
Jul 10 23:33:29.823154 systemd[1]: Running in initrd.
Jul 10 23:33:29.823161 systemd[1]: No hostname configured, using default hostname.
Jul 10 23:33:29.823169 systemd[1]: Hostname set to .
Jul 10 23:33:29.823177 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 23:33:29.823184 systemd[1]: Queued start job for default target initrd.target.
Jul 10 23:33:29.823191 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:33:29.823198 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 23:33:29.823206 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 23:33:29.823214 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 23:33:29.823221 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 23:33:29.823230 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 23:33:29.823239 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 23:33:29.823246 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 23:33:29.823254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:33:29.823261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 23:33:29.823268 systemd[1]: Reached target paths.target - Path Units.
Jul 10 23:33:29.823276 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 23:33:29.823284 systemd[1]: Reached target swap.target - Swaps.
Jul 10 23:33:29.823291 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 23:33:29.823299 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 23:33:29.823306 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 23:33:29.823314 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 23:33:29.823321 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 23:33:29.823328 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 23:33:29.823336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 23:33:29.823343 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 23:33:29.823352 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 23:33:29.823359 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 23:33:29.823367 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 23:33:29.823374 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 23:33:29.823382 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 10 23:33:29.823389 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 23:33:29.823397 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 23:33:29.823404 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 23:33:29.823413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:33:29.823420 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 23:33:29.823428 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 23:33:29.823435 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 23:33:29.823443 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 23:33:29.823465 systemd-journald[245]: Collecting audit messages is disabled.
Jul 10 23:33:29.823484 systemd-journald[245]: Journal started
Jul 10 23:33:29.823503 systemd-journald[245]: Runtime Journal (/run/log/journal/d1660ced5fc643b0954cb067c0914cb4) is 6M, max 48.5M, 42.4M free.
Jul 10 23:33:29.816307 systemd-modules-load[246]: Inserted module 'overlay'
Jul 10 23:33:29.828259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:33:29.830646 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 23:33:29.834309 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 23:33:29.832736 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 23:33:29.836784 kernel: Bridge firewalling registered
Jul 10 23:33:29.834795 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 23:33:29.836169 systemd-modules-load[246]: Inserted module 'br_netfilter'
Jul 10 23:33:29.840773 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 23:33:29.841772 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 23:33:29.844652 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 23:33:29.845685 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 10 23:33:29.845866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 23:33:29.848759 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 23:33:29.854716 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:33:29.856275 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 23:33:29.857468 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 23:33:29.859241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:33:29.873803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 23:33:29.888504 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9ae0b1f40710648305be8f7e436b6937e65ac0b33eb84d1b5b7411684b4e7538
Jul 10 23:33:29.908157 systemd-resolved[287]: Positive Trust Anchors:
Jul 10 23:33:29.908174 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 23:33:29.908205 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 23:33:29.912877 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jul 10 23:33:29.919417 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 23:33:29.923280 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 23:33:29.988682 kernel: SCSI subsystem initialized
Jul 10 23:33:29.993659 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 23:33:30.000669 kernel: iscsi: registered transport (tcp)
Jul 10 23:33:30.018604 kernel: iscsi: registered transport (qla4xxx)
Jul 10 23:33:30.018682 kernel: QLogic iSCSI HBA Driver
Jul 10 23:33:30.037214 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 23:33:30.052160 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 23:33:30.054070 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 23:33:30.104694 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 23:33:30.106979 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 23:33:30.173736 kernel: raid6: neonx8 gen() 15770 MB/s
Jul 10 23:33:30.190685 kernel: raid6: neonx4 gen() 15826 MB/s
Jul 10 23:33:30.207729 kernel: raid6: neonx2 gen() 13192 MB/s
Jul 10 23:33:30.224782 kernel: raid6: neonx1 gen() 10412 MB/s
Jul 10 23:33:30.241682 kernel: raid6: int64x8 gen() 6906 MB/s
Jul 10 23:33:30.258693 kernel: raid6: int64x4 gen() 7352 MB/s
Jul 10 23:33:30.275843 kernel: raid6: int64x2 gen() 6105 MB/s
Jul 10 23:33:30.292695 kernel: raid6: int64x1 gen() 5056 MB/s
Jul 10 23:33:30.292766 kernel: raid6: using algorithm neonx4 gen() 15826 MB/s
Jul 10 23:33:30.309689 kernel: raid6: .... xor() 12328 MB/s, rmw enabled
Jul 10 23:33:30.309755 kernel: raid6: using neon recovery algorithm
Jul 10 23:33:30.314784 kernel: xor: measuring software checksum speed
Jul 10 23:33:30.314837 kernel: 8regs : 21636 MB/sec
Jul 10 23:33:30.315802 kernel: 32regs : 21693 MB/sec
Jul 10 23:33:30.315829 kernel: arm64_neon : 28303 MB/sec
Jul 10 23:33:30.315838 kernel: xor: using function: arm64_neon (28303 MB/sec)
Jul 10 23:33:30.371691 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 23:33:30.380668 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 23:33:30.382904 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 23:33:30.406503 systemd-udevd[499]: Using default interface naming scheme 'v255'.
Jul 10 23:33:30.410533 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 23:33:30.412209 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 23:33:30.441992 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Jul 10 23:33:30.466946 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 23:33:30.469018 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 23:33:30.529671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 23:33:30.531744 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 23:33:30.588161 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 10 23:33:30.588324 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 10 23:33:30.595693 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 10 23:33:30.595738 kernel: GPT:9289727 != 19775487
Jul 10 23:33:30.595748 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 10 23:33:30.595757 kernel: GPT:9289727 != 19775487
Jul 10 23:33:30.596848 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 10 23:33:30.596888 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 23:33:30.597799 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 23:33:30.597925 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:33:30.599485 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:33:30.601798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:33:30.637485 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 10 23:33:30.638793 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 10 23:33:30.641337 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 23:33:30.644081 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:33:30.655352 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 10 23:33:30.663985 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 23:33:30.672508 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 10 23:33:30.673796 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 23:33:30.675679 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 23:33:30.677469 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 23:33:30.679915 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 23:33:30.681696 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 23:33:30.706904 disk-uuid[593]: Primary Header is updated.
Jul 10 23:33:30.706904 disk-uuid[593]: Secondary Entries is updated.
Jul 10 23:33:30.706904 disk-uuid[593]: Secondary Header is updated.
Jul 10 23:33:30.711680 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 23:33:30.714273 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 23:33:31.720903 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 23:33:31.720956 disk-uuid[598]: The operation has completed successfully.
Jul 10 23:33:31.778941 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 23:33:31.779039 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 23:33:31.813804 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 23:33:31.832429 sh[612]: Success
Jul 10 23:33:31.844009 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 23:33:31.844055 kernel: device-mapper: uevent: version 1.0.3
Jul 10 23:33:31.847656 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 10 23:33:31.855652 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 10 23:33:31.883459 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 23:33:31.891154 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 23:33:31.893425 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 23:33:31.902611 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 10 23:33:31.903206 kernel: BTRFS: device fsid 1d7bf05b-5ff9-431d-b4bb-8cc553220034 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (624)
Jul 10 23:33:31.903223 kernel: BTRFS info (device dm-0): first mount of filesystem 1d7bf05b-5ff9-431d-b4bb-8cc553220034
Jul 10 23:33:31.904841 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:33:31.904864 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 10 23:33:31.908157 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 23:33:31.909151 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 23:33:31.910324 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 23:33:31.911078 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 23:33:31.912420 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 23:33:31.932668 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (656)
Jul 10 23:33:31.934314 kernel: BTRFS info (device vda6): first mount of filesystem b11340e8-a7f1-4911-a987-813f898c22db
Jul 10 23:33:31.934355 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:33:31.934366 kernel: BTRFS info (device vda6): using free-space-tree
Jul 10 23:33:31.940662 kernel: BTRFS info (device vda6): last unmount of filesystem b11340e8-a7f1-4911-a987-813f898c22db
Jul 10 23:33:31.942804 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 23:33:31.944794 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 23:33:32.020805 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 23:33:32.023900 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 23:33:32.067782 systemd-networkd[808]: lo: Link UP
Jul 10 23:33:32.067795 systemd-networkd[808]: lo: Gained carrier
Jul 10 23:33:32.068562 systemd-networkd[808]: Enumeration completed
Jul 10 23:33:32.069179 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:33:32.069182 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 23:33:32.069750 systemd-networkd[808]: eth0: Link UP
Jul 10 23:33:32.069753 systemd-networkd[808]: eth0: Gained carrier
Jul 10 23:33:32.069760 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:33:32.071324 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 23:33:32.072210 systemd[1]: Reached target network.target - Network.
Jul 10 23:33:32.083767 ignition[696]: Ignition 2.21.0
Jul 10 23:33:32.083779 ignition[696]: Stage: fetch-offline
Jul 10 23:33:32.083825 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:33:32.083834 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 23:33:32.084174 ignition[696]: parsed url from cmdline: ""
Jul 10 23:33:32.084177 ignition[696]: no config URL provided
Jul 10 23:33:32.084182 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 23:33:32.084188 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Jul 10 23:33:32.084207 ignition[696]: op(1): [started] loading QEMU firmware config module
Jul 10 23:33:32.084211 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 10 23:33:32.091132 systemd-networkd[808]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 23:33:32.092003 ignition[696]: op(1): [finished] loading QEMU firmware config module
Jul 10 23:33:32.130743 ignition[696]: parsing config with SHA512: 45e21bb11fa925eba9683d9cb36760e3d8ff4b8582c7893c1d82f8ed844323f13ea485030747e14488a8e19f2223b45ad4fee6493e386cc368142b0a2295843b
Jul 10 23:33:32.137551 unknown[696]: fetched base config from "system"
Jul 10 23:33:32.137562 unknown[696]: fetched user config from "qemu"
Jul 10 23:33:32.137941 ignition[696]: fetch-offline: fetch-offline passed
Jul 10 23:33:32.137995 ignition[696]: Ignition finished successfully
Jul 10 23:33:32.141663 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 23:33:32.142803 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 10 23:33:32.143543 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 23:33:32.174406 ignition[816]: Ignition 2.21.0
Jul 10 23:33:32.174427 ignition[816]: Stage: kargs
Jul 10 23:33:32.174621 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:33:32.174651 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 23:33:32.176554 ignition[816]: kargs: kargs passed
Jul 10 23:33:32.176715 ignition[816]: Ignition finished successfully
Jul 10 23:33:32.178928 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 23:33:32.181138 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 23:33:32.213081 ignition[824]: Ignition 2.21.0
Jul 10 23:33:32.213098 ignition[824]: Stage: disks
Jul 10 23:33:32.213229 ignition[824]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:33:32.213238 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 23:33:32.214009 ignition[824]: disks: disks passed
Jul 10 23:33:32.215962 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 23:33:32.214053 ignition[824]: Ignition finished successfully
Jul 10 23:33:32.217258 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 23:33:32.218473 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 23:33:32.220112 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 23:33:32.221426 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 23:33:32.223110 systemd[1]: Reached target basic.target - Basic System.
Jul 10 23:33:32.224729 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 23:33:32.249076 systemd-fsck[835]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 10 23:33:32.253692 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 23:33:32.255414 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 23:33:32.323677 kernel: EXT4-fs (vda9): mounted filesystem 5e67f91a-7210-47f1-85b9-a7aa031a1904 r/w with ordered data mode. Quota mode: none.
Jul 10 23:33:32.324354 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 23:33:32.325466 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 23:33:32.330262 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 23:33:32.332321 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 23:33:32.333166 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 10 23:33:32.333204 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 23:33:32.333226 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 23:33:32.342023 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 23:33:32.343844 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 23:33:32.347643 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (843)
Jul 10 23:33:32.350005 kernel: BTRFS info (device vda6): first mount of filesystem b11340e8-a7f1-4911-a987-813f898c22db
Jul 10 23:33:32.350043 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:33:32.350060 kernel: BTRFS info (device vda6): using free-space-tree
Jul 10 23:33:32.353026 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 23:33:32.395560 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 23:33:32.399908 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
Jul 10 23:33:32.404658 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 23:33:32.407703 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 23:33:32.492709 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 23:33:32.495746 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 23:33:32.497258 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 23:33:32.510653 kernel: BTRFS info (device vda6): last unmount of filesystem b11340e8-a7f1-4911-a987-813f898c22db
Jul 10 23:33:32.530664 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 23:33:32.540526 ignition[957]: INFO : Ignition 2.21.0
Jul 10 23:33:32.542455 ignition[957]: INFO : Stage: mount
Jul 10 23:33:32.542455 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:33:32.542455 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 23:33:32.544412 ignition[957]: INFO : mount: mount passed
Jul 10 23:33:32.544412 ignition[957]: INFO : Ignition finished successfully
Jul 10 23:33:32.545073 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 23:33:32.547722 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 23:33:32.902157 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 23:33:32.903611 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 23:33:32.920648 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (969)
Jul 10 23:33:32.922199 kernel: BTRFS info (device vda6): first mount of filesystem b11340e8-a7f1-4911-a987-813f898c22db
Jul 10 23:33:32.922225 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:33:32.922235 kernel: BTRFS info (device vda6): using free-space-tree
Jul 10 23:33:32.926777 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 23:33:32.952769 ignition[986]: INFO : Ignition 2.21.0
Jul 10 23:33:32.952769 ignition[986]: INFO : Stage: files
Jul 10 23:33:32.955185 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:33:32.955185 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 23:33:32.957290 ignition[986]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 23:33:32.957290 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 23:33:32.957290 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 23:33:32.961210 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 23:33:32.961210 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 23:33:32.961210 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 23:33:32.961210 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 10 23:33:32.961210 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 10 23:33:32.958487 unknown[986]: wrote ssh authorized keys file for user: core
Jul 10 23:33:33.085396 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 23:33:33.229262 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 10 23:33:33.229262 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:33:33.233259 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:33:33.254422 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 10 23:33:33.470756 systemd-networkd[808]: eth0: Gained IPv6LL
Jul 10 23:33:33.714520 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 10 23:33:34.805132 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:33:34.805132 ignition[986]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 10 23:33:34.808081 ignition[986]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 23:33:34.812271 ignition[986]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 23:33:34.812271 ignition[986]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 10 23:33:34.812271 ignition[986]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 10 23:33:34.815251 ignition[986]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 23:33:34.815251 ignition[986]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 23:33:34.815251 ignition[986]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 10 23:33:34.815251 ignition[986]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 23:33:34.830158 ignition[986]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 23:33:34.834102 ignition[986]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 23:33:34.836275 ignition[986]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 23:33:34.836275 ignition[986]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 23:33:34.836275 ignition[986]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 23:33:34.836275 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 23:33:34.836275 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 23:33:34.836275 ignition[986]: INFO : files: files passed
Jul 10 23:33:34.836275 ignition[986]: INFO : Ignition finished successfully
Jul 10 23:33:34.837178 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 23:33:34.839521 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 23:33:34.842750 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 23:33:34.861407 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 23:33:34.862162 initrd-setup-root-after-ignition[1015]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 10 23:33:34.862532 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 23:33:34.866698 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:33:34.867929 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:33:34.867929 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:33:34.870378 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 23:33:34.871375 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 23:33:34.875347 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 23:33:34.900233 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 23:33:34.900346 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 23:33:34.904021 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 23:33:34.905775 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 23:33:34.907504 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 23:33:34.908170 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 23:33:34.935855 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 23:33:34.941330 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 23:33:34.965280 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 23:33:34.967518 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 23:33:34.969802 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 23:33:34.971545 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 23:33:34.971689 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 23:33:34.974115 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 23:33:34.975162 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 23:33:34.976527 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 23:33:34.978010 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 23:33:34.979646 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 23:33:34.981314 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 23:33:34.982934 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 23:33:34.984468 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 23:33:34.986142 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 23:33:34.987754 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 23:33:34.989393 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 23:33:34.990666 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 23:33:34.990778 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 23:33:34.992675 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 23:33:34.994346 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:33:34.995985 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 23:33:34.996090 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:33:34.997790 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 23:33:34.997903 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 23:33:35.000237 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 23:33:35.000357 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 23:33:35.001984 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 23:33:35.003278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 23:33:35.003388 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 23:33:35.005056 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 23:33:35.006523 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 23:33:35.007907 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 23:33:35.007985 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 23:33:35.009423 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 23:33:35.009501 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 23:33:35.011350 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 23:33:35.011465 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 23:33:35.012950 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 23:33:35.013055 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 23:33:35.015116 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 23:33:35.024372 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 23:33:35.025599 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 23:33:35.025730 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 23:33:35.027426 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 23:33:35.027527 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 23:33:35.036264 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 23:33:35.044337 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 23:33:35.053191 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 23:33:35.059433 ignition[1041]: INFO : Ignition 2.21.0
Jul 10 23:33:35.059433 ignition[1041]: INFO : Stage: umount
Jul 10 23:33:35.061825 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:33:35.061825 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 23:33:35.061825 ignition[1041]: INFO : umount: umount passed
Jul 10 23:33:35.061825 ignition[1041]: INFO : Ignition finished successfully
Jul 10 23:33:35.062944 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 23:33:35.063038 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 23:33:35.065197 systemd[1]: Stopped target network.target - Network.
Jul 10 23:33:35.066035 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 23:33:35.066091 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 23:33:35.067348 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 23:33:35.067389 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 23:33:35.068611 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 23:33:35.068669 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 23:33:35.069917 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 23:33:35.069954 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 23:33:35.071331 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 23:33:35.072472 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 23:33:35.078311 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 23:33:35.078416 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 23:33:35.081327 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 23:33:35.081604 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 23:33:35.081652 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 23:33:35.085942 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 23:33:35.086147 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 23:33:35.086235 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 23:33:35.089002 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 23:33:35.089321 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 23:33:35.090173 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 23:33:35.090211 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 23:33:35.092414 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 23:33:35.093252 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 23:33:35.093297 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 23:33:35.094789 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 23:33:35.094825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:33:35.096514 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 23:33:35.096552 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 23:33:35.098395 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 23:33:35.103176 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 23:33:35.112372 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 23:33:35.113104 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 23:33:35.114737 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 23:33:35.114825 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 23:33:35.117891 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 23:33:35.118002 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 23:33:35.120075 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 23:33:35.120125 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 23:33:35.121449 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 23:33:35.121482 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 23:33:35.122257 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 23:33:35.122299 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 23:33:35.124390 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 23:33:35.124435 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 23:33:35.126314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 23:33:35.126354 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:33:35.128481 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 23:33:35.128526 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 23:33:35.130770 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 23:33:35.132055 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 23:33:35.132109 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 23:33:35.134587 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 23:33:35.134629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 23:33:35.137258 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 10 23:33:35.137299 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 23:33:35.139948 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 23:33:35.139988 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 23:33:35.141804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 23:33:35.141849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:33:35.156951 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 23:33:35.157043 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 23:33:35.158948 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 23:33:35.161161 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 23:33:35.174867 systemd[1]: Switching root.
Jul 10 23:33:35.203191 systemd-journald[245]: Journal stopped
Jul 10 23:33:36.004849 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Jul 10 23:33:36.004905 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 23:33:36.004925 kernel: SELinux: policy capability open_perms=1
Jul 10 23:33:36.004935 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 23:33:36.004944 kernel: SELinux: policy capability always_check_network=0
Jul 10 23:33:36.004953 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 23:33:36.004969 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 23:33:36.004978 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 23:33:36.004989 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 23:33:36.005013 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 23:33:36.005023 kernel: audit: type=1403 audit(1752190415.367:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 23:33:36.005037 systemd[1]: Successfully loaded SELinux policy in 45.248ms.
Jul 10 23:33:36.005057 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.655ms.
Jul 10 23:33:36.005068 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 23:33:36.005080 systemd[1]: Detected virtualization kvm.
Jul 10 23:33:36.005089 systemd[1]: Detected architecture arm64.
Jul 10 23:33:36.005100 systemd[1]: Detected first boot.
Jul 10 23:33:36.005117 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 23:33:36.005127 zram_generator::config[1090]: No configuration found.
Jul 10 23:33:36.005138 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 23:33:36.005148 systemd[1]: Populated /etc with preset unit settings.
Jul 10 23:33:36.005159 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 23:33:36.005169 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 23:33:36.005179 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 23:33:36.005189 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 23:33:36.005201 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 23:33:36.005211 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 23:33:36.005221 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 23:33:36.005231 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 23:33:36.005241 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 23:33:36.005251 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 23:33:36.005262 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 23:33:36.005271 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 23:33:36.005284 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:33:36.005294 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 23:33:36.005304 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 23:33:36.005314 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 23:33:36.005323 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 23:33:36.005334 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 23:33:36.005344 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 10 23:33:36.005354 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:33:36.005366 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 23:33:36.005377 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 23:33:36.005387 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 23:33:36.005397 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 23:33:36.005407 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 23:33:36.005417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 23:33:36.005427 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 23:33:36.005438 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 23:33:36.005448 systemd[1]: Reached target swap.target - Swaps.
Jul 10 23:33:36.005460 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 23:33:36.005470 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 23:33:36.005480 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 23:33:36.005490 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 23:33:36.005501 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 23:33:36.005511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 23:33:36.005522 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 23:33:36.005532 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 23:33:36.005542 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 23:33:36.005553 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 23:33:36.005572 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 23:33:36.005584 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 23:33:36.005594 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 23:33:36.005605 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 23:33:36.005616 systemd[1]: Reached target machines.target - Containers.
Jul 10 23:33:36.005626 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 23:33:36.005678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 23:33:36.005692 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 23:33:36.005704 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 23:33:36.005715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 23:33:36.005724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 23:33:36.005734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 23:33:36.005744 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 23:33:36.005754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 23:33:36.005765 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 23:33:36.005775 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 23:33:36.005787 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 23:33:36.005796 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 23:33:36.005806 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 23:33:36.005815 kernel: fuse: init (API version 7.41)
Jul 10 23:33:36.005825 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 23:33:36.005835 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 23:33:36.005845 kernel: loop: module loaded
Jul 10 23:33:36.005854 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 23:33:36.005864 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 23:33:36.005875 kernel: ACPI: bus type drm_connector registered
Jul 10 23:33:36.005885 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 23:33:36.005895 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 23:33:36.005905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 23:33:36.005915 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 23:33:36.005926 systemd[1]: Stopped verity-setup.service.
Jul 10 23:33:36.005936 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 23:33:36.005946 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 23:33:36.005980 systemd-journald[1162]: Collecting audit messages is disabled.
Jul 10 23:33:36.006003 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 23:33:36.006014 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 23:33:36.006026 systemd-journald[1162]: Journal started
Jul 10 23:33:36.006049 systemd-journald[1162]: Runtime Journal (/run/log/journal/d1660ced5fc643b0954cb067c0914cb4) is 6M, max 48.5M, 42.4M free.
Jul 10 23:33:35.786367 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 23:33:35.806738 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 10 23:33:35.807148 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 23:33:36.008655 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 23:33:36.009225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 23:33:36.010352 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 23:33:36.012685 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 23:33:36.014020 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 23:33:36.015242 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 23:33:36.015432 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 23:33:36.016734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 23:33:36.016904 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 23:33:36.018088 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 23:33:36.018249 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 23:33:36.019522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 23:33:36.019760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 23:33:36.021168 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 23:33:36.021345 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 23:33:36.022707 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 23:33:36.022868 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 23:33:36.024055 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 23:33:36.025275 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 23:33:36.026598 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 23:33:36.027917 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 23:33:36.040482 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 23:33:36.043471 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 23:33:36.045897 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 23:33:36.047153 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 23:33:36.047191 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 23:33:36.049236 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 23:33:36.052730 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 23:33:36.053726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 23:33:36.055138 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 23:33:36.057159 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 23:33:36.058256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 23:33:36.059786 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 23:33:36.060681 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 23:33:36.063501 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 23:33:36.065252 systemd-journald[1162]: Time spent on flushing to /var/log/journal/d1660ced5fc643b0954cb067c0914cb4 is 31.889ms for 881 entries.
Jul 10 23:33:36.065252 systemd-journald[1162]: System Journal (/var/log/journal/d1660ced5fc643b0954cb067c0914cb4) is 8M, max 195.6M, 187.6M free.
Jul 10 23:33:36.115874 systemd-journald[1162]: Received client request to flush runtime journal.
Jul 10 23:33:36.115926 kernel: loop0: detected capacity change from 0 to 138376
Jul 10 23:33:36.115944 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 23:33:36.065730 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 23:33:36.069498 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 23:33:36.073679 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 23:33:36.077171 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 23:33:36.078521 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 23:33:36.091334 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 23:33:36.092451 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 23:33:36.094973 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 23:33:36.105131 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jul 10 23:33:36.105163 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jul 10 23:33:36.107492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:33:36.112301 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 23:33:36.115918 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 23:33:36.118462 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 23:33:36.123051 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 23:33:36.128817 kernel: loop1: detected capacity change from 0 to 107312
Jul 10 23:33:36.146081 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 23:33:36.150140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 23:33:36.152069 kernel: loop2: detected capacity change from 0 to 211168
Jul 10 23:33:36.172980 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Jul 10 23:33:36.172998 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Jul 10 23:33:36.177242 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
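systemd-journal-flush.service is the point where journald copies the runtime journal from /run/log/journal into persistent storage under /var/log/journal, which is why the "System Journal" size line appears right here. The same operation can be inspected or triggered by hand with standard journalctl options:

    journalctl --flush        # ask journald to move /run logs into /var/log/journal
    journalctl --disk-usage   # show how much space all journals consume
    journalctl -b -u systemd-journal-flush.service   # review the flush on the current boot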
Jul 10 23:33:36.185925 kernel: loop3: detected capacity change from 0 to 138376
Jul 10 23:33:36.193195 kernel: loop4: detected capacity change from 0 to 107312
Jul 10 23:33:36.198820 kernel: loop5: detected capacity change from 0 to 211168
Jul 10 23:33:36.202118 (sd-merge)[1231]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 10 23:33:36.202503 (sd-merge)[1231]: Merged extensions into '/usr'.
Jul 10 23:33:36.206116 systemd[1]: Reload requested from client PID 1206 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 23:33:36.206134 systemd[1]: Reloading...
Jul 10 23:33:36.265812 zram_generator::config[1261]: No configuration found.
Jul 10 23:33:36.330898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 23:33:36.337514 ldconfig[1201]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 23:33:36.393043 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 23:33:36.393366 systemd[1]: Reloading finished in 186 ms.
Jul 10 23:33:36.426311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 23:33:36.427842 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 23:33:36.442006 systemd[1]: Starting ensure-sysext.service...
Jul 10 23:33:36.443736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 23:33:36.454875 systemd[1]: Reload requested from client PID 1291 ('systemctl') (unit ensure-sysext.service)...
Jul 10 23:33:36.454892 systemd[1]: Reloading...
Jul 10 23:33:36.465674 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 10 23:33:36.465711 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 10 23:33:36.465948 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 23:33:36.466133 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 23:33:36.466764 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 23:33:36.466971 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jul 10 23:33:36.467014 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jul 10 23:33:36.469266 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 23:33:36.469281 systemd-tmpfiles[1292]: Skipping /boot
Jul 10 23:33:36.478531 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 23:33:36.478553 systemd-tmpfiles[1292]: Skipping /boot
Jul 10 23:33:36.511657 zram_generator::config[1319]: No configuration found.
Jul 10 23:33:36.587606 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 23:33:36.650879 systemd[1]: Reloading finished in 195 ms.
Jul 10 23:33:36.672276 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
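The (sd-merge) lines show systemd-sysext overlaying the extension images onto /usr, including the kubernetes image that Ignition linked into /etc/extensions earlier; the loop3-loop5 capacity changes are the extension images being attached as loop devices before the merge. On a running machine the merge state can be inspected with the standard systemd-sysext verbs:

    systemd-sysext status    # which hierarchies are merged, and from which images
    systemd-sysext refresh   # re-merge after adding or removing extension images
    ls /etc/extensions /var/lib/extensions /usr/lib/extensions   # sysext image search paths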
Jul 10 23:33:36.673622 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 23:33:36.691920 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 23:33:36.694156 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 23:33:36.696183 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 23:33:36.700066 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 23:33:36.704080 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 23:33:36.706597 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 23:33:36.713833 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 23:33:36.716169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 23:33:36.726280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 23:33:36.729893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 23:33:36.732498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 23:33:36.733653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 23:33:36.733780 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 23:33:36.734772 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 23:33:36.744493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 23:33:36.744743 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 23:33:36.746404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 23:33:36.746567 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 23:33:36.748479 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 23:33:36.748745 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 23:33:36.754186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 23:33:36.756491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 23:33:36.761028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 23:33:36.764481 systemd-udevd[1360]: Using default interface naming scheme 'v255'.
Jul 10 23:33:36.765082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 23:33:36.766069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 23:33:36.766253 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 23:33:36.768933 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 23:33:36.777707 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 23:33:36.782746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 23:33:36.782946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 23:33:36.784969 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 23:33:36.785162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 23:33:36.792085 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 23:33:36.793730 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 23:33:36.793896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 23:33:36.795362 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 23:33:36.796851 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 23:33:36.804878 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 23:33:36.813702 systemd[1]: Finished ensure-sysext.service.
Jul 10 23:33:36.815442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 23:33:36.817083 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 23:33:36.820306 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 23:33:36.825000 augenrules[1432]: No rules
Jul 10 23:33:36.825813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 23:33:36.829906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 23:33:36.831902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 23:33:36.831952 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 23:33:36.834912 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 23:33:36.841885 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 10 23:33:36.843817 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 23:33:36.845597 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 23:33:36.845853 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 23:33:36.848021 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 23:33:36.848189 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 23:33:36.851354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 23:33:36.851552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 23:33:36.856458 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 23:33:36.856674 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 23:33:36.858307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 23:33:36.858780 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
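The repeated modprobe@*.service start/finish pairs above are instances of a single systemd template unit; systemd substitutes the instance name (dm_mod, efi_pstore, loop, ...) for the %I specifier at activation time. An abridged sketch of the template, close to but not quoted from the upstream systemd unit:

    # modprobe@.service (abridged sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    # leading '-' tolerates modules that are missing or built in
    ExecStart=-/sbin/modprobe -abq %I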
Jul 10 23:33:36.867208 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 23:33:36.867265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 23:33:36.932598 systemd-resolved[1358]: Positive Trust Anchors:
Jul 10 23:33:36.932616 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 23:33:36.932657 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 23:33:36.933884 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 10 23:33:36.940870 systemd-resolved[1358]: Defaulting to hostname 'linux'.
Jul 10 23:33:36.942324 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 23:33:36.943331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 23:33:36.961344 systemd-networkd[1440]: lo: Link UP
Jul 10 23:33:36.961352 systemd-networkd[1440]: lo: Gained carrier
Jul 10 23:33:36.963847 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 10 23:33:36.964022 systemd-networkd[1440]: Enumeration completed
Jul 10 23:33:36.965183 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 23:33:36.966215 systemd[1]: Reached target network.target - Network.
Jul 10 23:33:36.967204 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 23:33:36.968107 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 10 23:33:36.968980 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:33:36.968991 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 23:33:36.969216 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 10 23:33:36.970488 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 10 23:33:36.971443 systemd-networkd[1440]: eth0: Link UP
Jul 10 23:33:36.971571 systemd-networkd[1440]: eth0: Gained carrier
Jul 10 23:33:36.971573 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 23:33:36.971592 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:33:36.971603 systemd[1]: Reached target paths.target - Path Units.
Jul 10 23:33:36.972320 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 23:33:36.973483 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 10 23:33:36.974537 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 10 23:33:36.975703 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 23:33:36.978107 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 10 23:33:36.980404 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 10 23:33:36.984199 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 10 23:33:36.985354 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 10 23:33:36.986438 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 10 23:33:36.986725 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 23:33:36.987924 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection.
Jul 10 23:33:36.988520 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 10 23:33:36.988588 systemd-timesyncd[1441]: Initial clock synchronization to Thu 2025-07-10 23:33:37.227404 UTC.
Jul 10 23:33:36.993905 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 10 23:33:36.995962 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 10 23:33:36.998708 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 10 23:33:37.001025 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 23:33:37.002893 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 10 23:33:37.005837 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 23:33:37.007552 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 23:33:37.008365 systemd[1]: Reached target basic.target - Basic System.
Jul 10 23:33:37.009269 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 10 23:33:37.009299 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 10 23:33:37.012011 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 10 23:33:37.016900 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 10 23:33:37.019744 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 23:33:37.021813 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 10 23:33:37.031863 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 10 23:33:37.033755 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 10 23:33:37.035007 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 10 23:33:37.038072 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 10 23:33:37.044963 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 10 23:33:37.047151 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 10 23:33:37.048307 jq[1465]: false
Jul 10 23:33:37.050936 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
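eth0 gets its DHCPv4 lease (10.0.0.6/16 via 10.0.0.1) because it matched Flatcar's catch-all zz-default.network, as the two "found matching network" lines show. The shipped file is approximately the following (a sketch of the stock Flatcar default, not quoted from this system; the exact [Match] and [DHCP] stanzas may differ by release):

    # /usr/lib/systemd/network/zz-default.network (approximate sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes

A more specific .network file placed in /etc/systemd/network sorts earlier than zz-default and would take precedence for the interfaces it matches.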
Jul 10 23:33:37.055158 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 10 23:33:37.056883 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 10 23:33:37.072031 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 10 23:33:37.072870 systemd[1]: Starting update-engine.service - Update Engine...
Jul 10 23:33:37.076862 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 10 23:33:37.078727 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 10 23:33:37.084706 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 23:33:37.085971 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 10 23:33:37.086176 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 10 23:33:37.086421 systemd[1]: motdgen.service: Deactivated successfully.
Jul 10 23:33:37.086579 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 10 23:33:37.086851 jq[1494]: true
Jul 10 23:33:37.090081 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 10 23:33:37.090287 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 10 23:33:37.111814 extend-filesystems[1466]: Found /dev/vda6
Jul 10 23:33:37.116028 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 23:33:37.120340 tar[1500]: linux-arm64/LICENSE
Jul 10 23:33:37.120619 tar[1500]: linux-arm64/helm
Jul 10 23:33:37.125714 jq[1501]: true
Jul 10 23:33:37.131425 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 10 23:33:37.140670 extend-filesystems[1466]: Found /dev/vda9
Jul 10 23:33:37.148553 extend-filesystems[1466]: Checking size of /dev/vda9
Jul 10 23:33:37.159143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:33:37.173152 extend-filesystems[1466]: Resized partition /dev/vda9
Jul 10 23:33:37.180701 extend-filesystems[1529]: resize2fs 1.47.2 (1-Jan-2025)
Jul 10 23:33:37.185790 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 10 23:33:37.190278 dbus-daemon[1463]: [system] SELinux support is enabled
Jul 10 23:33:37.190464 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 10 23:33:37.194806 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 23:33:37.198829 update_engine[1492]: I20250710 23:33:37.195956 1492 main.cc:92] Flatcar Update Engine starting
Jul 10 23:33:37.194844 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 23:33:37.196276 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 23:33:37.196293 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 10 23:33:37.204326 update_engine[1492]: I20250710 23:33:37.204269 1492 update_check_scheduler.cc:74] Next update check in 10m9s
Jul 10 23:33:37.206766 systemd[1]: Started update-engine.service - Update Engine.
Jul 10 23:33:37.210728 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 10 23:33:37.232680 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 10 23:33:37.261599 extend-filesystems[1529]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 10 23:33:37.261599 extend-filesystems[1529]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 10 23:33:37.261599 extend-filesystems[1529]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 10 23:33:37.275784 extend-filesystems[1466]: Resized filesystem in /dev/vda9
Jul 10 23:33:37.268429 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 23:33:37.278753 bash[1539]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 23:33:37.270750 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 10 23:33:37.328331 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 10 23:33:37.332047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:33:37.340610 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 10 23:33:37.359003 systemd-logind[1478]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 10 23:33:37.359460 systemd-logind[1478]: New seat seat0.
Jul 10 23:33:37.361351 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 10 23:33:37.361542 locksmithd[1541]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 10 23:33:37.430074 containerd[1506]: time="2025-07-10T23:33:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 10 23:33:37.430938 containerd[1506]: time="2025-07-10T23:33:37.430900008Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 10 23:33:37.441904 containerd[1506]: time="2025-07-10T23:33:37.441854130Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.546µs"
Jul 10 23:33:37.441904 containerd[1506]: time="2025-07-10T23:33:37.441900145Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 10 23:33:37.442007 containerd[1506]: time="2025-07-10T23:33:37.441920248Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 10 23:33:37.442116 containerd[1506]: time="2025-07-10T23:33:37.442093390Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 10 23:33:37.442146 containerd[1506]: time="2025-07-10T23:33:37.442116912Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 10 23:33:37.442168 containerd[1506]: time="2025-07-10T23:33:37.442143854Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.442193288Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
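extend-filesystems grows the root filesystem to fill its (already enlarged) partition on first boot; the resize2fs/extend-filesystems[1529] lines show the ext4 filesystem on /dev/vda9 being resized online from 553472 to 1864699 4k blocks while mounted on /. Done by hand, and assuming the partition itself has already been grown, the equivalent is roughly:

    # grow a mounted ext4 filesystem to fill its partition (online resize)
    resize2fs /dev/vda9
    # verify the new size in filesystem blocks
    dumpe2fs -h /dev/vda9 | grep 'Block count'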
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.442209972Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.442474854Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.442492239Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.442504679Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.442513372Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.442585421Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.442897596Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.442927462Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.442938420Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 10 23:33:37.443699 containerd[1506]: time="2025-07-10T23:33:37.443472182Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 10 23:33:37.444090 containerd[1506]: time="2025-07-10T23:33:37.444055995Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 10 23:33:37.444218 containerd[1506]: time="2025-07-10T23:33:37.444199765Z" level=info msg="metadata content store policy set" policy=shared
Jul 10 23:33:37.450994 containerd[1506]: time="2025-07-10T23:33:37.450944062Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 10 23:33:37.451067 containerd[1506]: time="2025-07-10T23:33:37.451020355Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 10 23:33:37.451067 containerd[1506]: time="2025-07-10T23:33:37.451040705Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 10 23:33:37.451067 containerd[1506]: time="2025-07-10T23:33:37.451054918Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 10 23:33:37.451171 containerd[1506]: time="2025-07-10T23:33:37.451068594Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 10 23:33:37.451171 containerd[1506]: time="2025-07-10T23:33:37.451080829Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 10 23:33:37.451171 containerd[1506]: time="2025-07-10T23:33:37.451093146Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 10 23:33:37.451171 containerd[1506]: time="2025-07-10T23:33:37.451114527Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 10 23:33:37.451171 containerd[1506]: time="2025-07-10T23:33:37.451136813Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 10 23:33:37.451171 containerd[1506]: time="2025-07-10T23:33:37.451154568Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 10 23:33:37.451171 containerd[1506]: time="2025-07-10T23:33:37.451164578Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 10 23:33:37.451288 containerd[1506]: time="2025-07-10T23:33:37.451179244Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 10 23:33:37.451357 containerd[1506]: time="2025-07-10T23:33:37.451334878Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 10 23:33:37.451385 containerd[1506]: time="2025-07-10T23:33:37.451363385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 10 23:33:37.451385 containerd[1506]: time="2025-07-10T23:33:37.451379368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 10 23:33:37.451418 containerd[1506]: time="2025-07-10T23:33:37.451391232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 10 23:33:37.451418 containerd[1506]: time="2025-07-10T23:33:37.451402726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 10 23:33:37.451418 containerd[1506]: time="2025-07-10T23:33:37.451415208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 10 23:33:37.451469 containerd[1506]: time="2025-07-10T23:33:37.451426413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 10 23:33:37.451469 containerd[1506]: time="2025-07-10T23:33:37.451436505Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 10 23:33:37.451469 containerd[1506]: time="2025-07-10T23:33:37.451447546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 10 23:33:37.451469 containerd[1506]: time="2025-07-10T23:33:37.451460357Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 10 23:33:37.451543 containerd[1506]: time="2025-07-10T23:33:37.451470697Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 10 23:33:37.451710 containerd[1506]: time="2025-07-10T23:33:37.451690636Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 10 23:33:37.451744 containerd[1506]: time="2025-07-10T23:33:37.451715436Z" level=info msg="Start snapshots syncer"
Jul 10 23:33:37.451763 containerd[1506]: time="2025-07-10T23:33:37.451748062Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 10 23:33:37.452008 containerd[1506]: time="2025-07-10T23:33:37.451971668Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 10 23:33:37.452101 containerd[1506]: time="2025-07-10T23:33:37.452028022Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 10 23:33:37.452121 containerd[1506]: time="2025-07-10T23:33:37.452107734Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 10 23:33:37.452245 containerd[1506]: time="2025-07-10T23:33:37.452222833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 10 23:33:37.452271 containerd[1506]: time="2025-07-10T23:33:37.452253152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 10 23:33:37.452271 containerd[1506]: time="2025-07-10T23:33:37.452264893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 10 23:33:37.452310 containerd[1506]: time="2025-07-10T23:33:37.452277498Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 10 23:33:37.452310 containerd[1506]: time="2025-07-10T23:33:37.452289857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 10 23:33:37.452310 containerd[1506]: time="2025-07-10T23:33:37.452301556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 10 23:33:37.452358 containerd[1506]: time="2025-07-10T23:33:37.452312184Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 10 23:33:37.452358 containerd[1506]: time="2025-07-10T23:33:37.452338549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 10 23:33:37.452358 containerd[1506]: time="2025-07-10T23:33:37.452350537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 10 23:33:37.452408 containerd[1506]: time="2025-07-10T23:33:37.452363184Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 10 23:33:37.452408 containerd[1506]: time="2025-07-10T23:33:37.452401165Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 10 23:33:37.452442 containerd[1506]: time="2025-07-10T23:33:37.452417231Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 10 23:33:37.452442 containerd[1506]: time="2025-07-10T23:33:37.452426006Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 10 23:33:37.452442 containerd[1506]: time="2025-07-10T23:33:37.452436881Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 10 23:33:37.452492 containerd[1506]: time="2025-07-10T23:33:37.452444997Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 10 23:33:37.452492 containerd[1506]: time="2025-07-10T23:33:37.452455625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 10 23:33:37.452492 containerd[1506]: time="2025-07-10T23:33:37.452467159Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 10 23:33:37.452571 containerd[1506]: time="2025-07-10T23:33:37.452555110Z" level=info msg="runtime interface created"
Jul 10 23:33:37.452571 containerd[1506]: time="2025-07-10T23:33:37.452565244Z" level=info msg="created NRI interface"
Jul 10 23:33:37.452613 containerd[1506]: time="2025-07-10T23:33:37.452580486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 10 23:33:37.452613 containerd[1506]: time="2025-07-10T23:33:37.452594534Z" level=info msg="Connect containerd service"
Jul 10 23:33:37.452647 containerd[1506]: time="2025-07-10T23:33:37.452622423Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 10 23:33:37.453365 containerd[1506]: time="2025-07-10T23:33:37.453333652Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 23:33:37.559254 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 10 23:33:37.562761 containerd[1506]: time="2025-07-10T23:33:37.562697737Z" level=info msg="Start subscribing containerd event"
Jul 10 23:33:37.562850 containerd[1506]: time="2025-07-10T23:33:37.562777408Z" level=info msg="Start recovering state"
Jul 10 23:33:37.562882 containerd[1506]: time="2025-07-10T23:33:37.562874710Z" level=info msg="Start event monitor"
Jul 10 23:33:37.562926 containerd[1506]: time="2025-07-10T23:33:37.562899056Z" level=info msg="Start cni network conf syncer for default"
Jul 10 23:33:37.562926 containerd[1506]: time="2025-07-10T23:33:37.562910920Z" level=info msg="Start streaming server"
Jul 10 23:33:37.562926
containerd[1506]: time="2025-07-10T23:33:37.562920107Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 23:33:37.562976 containerd[1506]: time="2025-07-10T23:33:37.562927934Z" level=info msg="runtime interface starting up..." Jul 10 23:33:37.562976 containerd[1506]: time="2025-07-10T23:33:37.562934072Z" level=info msg="starting plugins..." Jul 10 23:33:37.562976 containerd[1506]: time="2025-07-10T23:33:37.562948902Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 23:33:37.563184 containerd[1506]: time="2025-07-10T23:33:37.563148532Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 23:33:37.563374 containerd[1506]: time="2025-07-10T23:33:37.563352076Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 23:33:37.564703 containerd[1506]: time="2025-07-10T23:33:37.563428122Z" level=info msg="containerd successfully booted in 0.133737s" Jul 10 23:33:37.563524 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 23:33:37.584426 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 23:33:37.589927 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 23:33:37.610355 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 23:33:37.610616 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 23:33:37.613471 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 23:33:37.625184 tar[1500]: linux-arm64/README.md Jul 10 23:33:37.638313 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 23:33:37.641970 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 23:33:37.644811 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 23:33:37.647050 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 10 23:33:37.648254 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 23:33:38.014971 systemd-networkd[1440]: eth0: Gained IPv6LL Jul 10 23:33:38.017544 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 23:33:38.019070 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 23:33:38.021410 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 23:33:38.040091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:33:38.042263 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 23:33:38.057573 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 23:33:38.058054 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 23:33:38.059908 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 23:33:38.072336 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 23:33:38.666002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:33:38.668364 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 23:33:38.676617 (kubelet)[1619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:33:38.676917 systemd[1]: Startup finished in 2.101s (kernel) + 5.731s (initrd) + 3.356s (userspace) = 11.189s. 
Jul 10 23:33:39.188366 kubelet[1619]: E0710 23:33:39.188224 1619 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:33:39.191161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:33:39.191308 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:33:39.192826 systemd[1]: kubelet.service: Consumed 866ms CPU time, 259.9M memory peak. Jul 10 23:33:43.025667 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 23:33:43.027191 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:33184.service - OpenSSH per-connection server daemon (10.0.0.1:33184). Jul 10 23:33:43.167989 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 33184 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:33:43.169822 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:33:43.175593 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 23:33:43.176514 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 23:33:43.181766 systemd-logind[1478]: New session 1 of user core. Jul 10 23:33:43.194548 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 23:33:43.196885 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 23:33:43.210782 (systemd)[1638]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 23:33:43.215932 systemd-logind[1478]: New session c1 of user core. Jul 10 23:33:43.335193 systemd[1638]: Queued start job for default target default.target. Jul 10 23:33:43.346614 systemd[1638]: Created slice app.slice - User Application Slice. Jul 10 23:33:43.346666 systemd[1638]: Reached target paths.target - Paths. Jul 10 23:33:43.346704 systemd[1638]: Reached target timers.target - Timers. Jul 10 23:33:43.347959 systemd[1638]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 23:33:43.357924 systemd[1638]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 23:33:43.358092 systemd[1638]: Reached target sockets.target - Sockets. Jul 10 23:33:43.358190 systemd[1638]: Reached target basic.target - Basic System. Jul 10 23:33:43.358331 systemd[1638]: Reached target default.target - Main User Target. Jul 10 23:33:43.358384 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 23:33:43.358483 systemd[1638]: Startup finished in 136ms. Jul 10 23:33:43.359677 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 23:33:43.417813 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:33200.service - OpenSSH per-connection server daemon (10.0.0.1:33200). Jul 10 23:33:43.467975 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 33200 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:33:43.469282 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:33:43.473430 systemd-logind[1478]: New session 2 of user core. Jul 10 23:33:43.491824 systemd[1]: Started session-2.scope - Session 2 of User core. 
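The kubelet exit at 23:33:39 (status=1, config.yaml missing) is likewise normal at this stage: /var/lib/kubelet/config.yaml is only written when the node is bootstrapped, typically by kubeadm init or kubeadm join (the unit already references KUBELET_KUBEADM_ARGS), so systemd keeps restarting the service until then. As a sketch of what eventually lands in that file (illustrative values, not read from this host), it carries a KubeletConfiguration along the lines of:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests

The staticPodPath here matches the "Adding static pod path" line the successfully configured kubelet logs later, at 23:34:03.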
Jul 10 23:33:43.543337 sshd[1651]: Connection closed by 10.0.0.1 port 33200 Jul 10 23:33:43.543783 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Jul 10 23:33:43.552745 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:33200.service: Deactivated successfully. Jul 10 23:33:43.555037 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 23:33:43.556633 systemd-logind[1478]: Session 2 logged out. Waiting for processes to exit. Jul 10 23:33:43.557959 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:33210.service - OpenSSH per-connection server daemon (10.0.0.1:33210). Jul 10 23:33:43.558803 systemd-logind[1478]: Removed session 2. Jul 10 23:33:43.614658 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 33210 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:33:43.616133 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:33:43.620986 systemd-logind[1478]: New session 3 of user core. Jul 10 23:33:43.626806 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 23:33:43.675160 sshd[1659]: Connection closed by 10.0.0.1 port 33210 Jul 10 23:33:43.675476 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Jul 10 23:33:43.687476 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:33210.service: Deactivated successfully. Jul 10 23:33:43.690090 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 23:33:43.690784 systemd-logind[1478]: Session 3 logged out. Waiting for processes to exit. Jul 10 23:33:43.693451 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:33214.service - OpenSSH per-connection server daemon (10.0.0.1:33214). Jul 10 23:33:43.694416 systemd-logind[1478]: Removed session 3. Jul 10 23:33:43.745036 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 33214 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:33:43.746308 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:33:43.750730 systemd-logind[1478]: New session 4 of user core. Jul 10 23:33:43.766819 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 23:33:43.822101 sshd[1667]: Connection closed by 10.0.0.1 port 33214 Jul 10 23:33:43.822535 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Jul 10 23:33:43.831513 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:33214.service: Deactivated successfully. Jul 10 23:33:43.833186 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 23:33:43.835175 systemd-logind[1478]: Session 4 logged out. Waiting for processes to exit. Jul 10 23:33:43.837729 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:33222.service - OpenSSH per-connection server daemon (10.0.0.1:33222). Jul 10 23:33:43.838488 systemd-logind[1478]: Removed session 4. Jul 10 23:33:43.908378 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 33222 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:33:43.909577 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:33:43.913702 systemd-logind[1478]: New session 5 of user core. Jul 10 23:33:43.924801 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 10 23:33:43.979789 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 23:33:43.980043 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:33:44.005292 sudo[1676]: pam_unix(sudo:session): session closed for user root Jul 10 23:33:44.006776 sshd[1675]: Connection closed by 10.0.0.1 port 33222 Jul 10 23:33:44.007253 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Jul 10 23:33:44.023668 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:33222.service: Deactivated successfully. Jul 10 23:33:44.025074 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 23:33:44.025745 systemd-logind[1478]: Session 5 logged out. Waiting for processes to exit. Jul 10 23:33:44.028100 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:33230.service - OpenSSH per-connection server daemon (10.0.0.1:33230). Jul 10 23:33:44.028555 systemd-logind[1478]: Removed session 5. Jul 10 23:33:44.083221 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 33230 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:33:44.084457 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:33:44.088380 systemd-logind[1478]: New session 6 of user core. Jul 10 23:33:44.098827 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 23:33:44.149822 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 23:33:44.150078 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:33:44.222482 sudo[1686]: pam_unix(sudo:session): session closed for user root Jul 10 23:33:44.227302 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 23:33:44.227635 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:33:44.235621 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:33:44.288440 augenrules[1708]: No rules Jul 10 23:33:44.290040 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:33:44.290295 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:33:44.292471 sudo[1685]: pam_unix(sudo:session): session closed for user root Jul 10 23:33:44.294704 sshd[1684]: Connection closed by 10.0.0.1 port 33230 Jul 10 23:33:44.295373 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Jul 10 23:33:44.305066 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:33230.service: Deactivated successfully. Jul 10 23:33:44.306542 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 23:33:44.307182 systemd-logind[1478]: Session 6 logged out. Waiting for processes to exit. Jul 10 23:33:44.309547 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:33236.service - OpenSSH per-connection server daemon (10.0.0.1:33236). Jul 10 23:33:44.310003 systemd-logind[1478]: Removed session 6. Jul 10 23:33:44.355329 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 33236 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:33:44.356472 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:33:44.360704 systemd-logind[1478]: New session 7 of user core. Jul 10 23:33:44.376846 systemd[1]: Started session-7.scope - Session 7 of User core. 
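The augenrules "No rules" message follows directly from the two sudo commands above it: augenrules builds the active ruleset by concatenating /etc/audit/rules.d/*.rules, and the only files present (80-selinux.rules, 99-default.rules) were just deleted, so the audit-rules.service restart loads an empty set. For reference, fragments in that directory use plain auditctl syntax; a hypothetical watch rule would look like:

    # /etc/audit/rules.d/50-k8s-config.rules (illustrative)
    -w /etc/kubernetes/ -p wa -k k8s-config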
Jul 10 23:33:44.428629 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 23:33:44.428925 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:33:44.808151 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 23:33:44.824015 (dockerd)[1739]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 23:33:45.105969 dockerd[1739]: time="2025-07-10T23:33:45.105842114Z" level=info msg="Starting up" Jul 10 23:33:45.107715 dockerd[1739]: time="2025-07-10T23:33:45.107680148Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 23:33:45.142238 systemd[1]: var-lib-docker-metacopy\x2dcheck2743431897-merged.mount: Deactivated successfully. Jul 10 23:33:45.152583 dockerd[1739]: time="2025-07-10T23:33:45.152408820Z" level=info msg="Loading containers: start." Jul 10 23:33:45.161663 kernel: Initializing XFRM netlink socket Jul 10 23:33:45.356826 systemd-networkd[1440]: docker0: Link UP Jul 10 23:33:45.360039 dockerd[1739]: time="2025-07-10T23:33:45.359960208Z" level=info msg="Loading containers: done." Jul 10 23:33:45.375886 dockerd[1739]: time="2025-07-10T23:33:45.375824464Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 23:33:45.376019 dockerd[1739]: time="2025-07-10T23:33:45.375914863Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 23:33:45.376051 dockerd[1739]: time="2025-07-10T23:33:45.376019405Z" level=info msg="Initializing buildkit" Jul 10 23:33:45.398210 dockerd[1739]: time="2025-07-10T23:33:45.398164837Z" level=info msg="Completed buildkit initialization" Jul 10 23:33:45.404729 dockerd[1739]: time="2025-07-10T23:33:45.404675180Z" level=info msg="Daemon has completed initialization" Jul 10 23:33:45.405452 dockerd[1739]: time="2025-07-10T23:33:45.405212034Z" level=info msg="API listen on /run/docker.sock" Jul 10 23:33:45.404903 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 23:33:46.046656 containerd[1506]: time="2025-07-10T23:33:46.046603926Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 10 23:33:46.679020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411217166.mount: Deactivated successfully. 
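The overlay2 warning from dockerd at 23:33:45 is informational: it reports that with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in this kernel, the native overlayfs diff path is skipped, which only degrades performance when building images. The chosen storage driver can also be pinned explicitly in the daemon config; a minimal sketch (hypothetical /etc/docker/daemon.json, showing only this one key):

    {
      "storage-driver": "overlay2"
    }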
Jul 10 23:33:47.800501 containerd[1506]: time="2025-07-10T23:33:47.800442964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:47.801032 containerd[1506]: time="2025-07-10T23:33:47.800993940Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 10 23:33:47.801933 containerd[1506]: time="2025-07-10T23:33:47.801882105Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:47.805472 containerd[1506]: time="2025-07-10T23:33:47.805418438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:47.806139 containerd[1506]: time="2025-07-10T23:33:47.806099710Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.759442401s" Jul 10 23:33:47.806139 containerd[1506]: time="2025-07-10T23:33:47.806136880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 10 23:33:47.809432 containerd[1506]: time="2025-07-10T23:33:47.809393674Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 10 23:33:49.205139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 23:33:49.206885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:33:49.338706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:33:49.342537 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:33:49.401035 kubelet[2018]: E0710 23:33:49.400976 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:33:49.404415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:33:49.404562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:33:49.404980 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.7M memory peak. 
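Each pull, like the kube-apiserver one completing above, emits ImageCreate events for three names that all resolve to the same content: the repo tag, the content-addressed image ID (sha256:04ac773c...), and the repo digest, stored under containerd's k8s.io namespace. A hedged sketch of inspecting them from the host (exact flags and output vary with the installed ctr/crictl versions):

    ctr --namespace k8s.io images ls
    crictl images --digests

(The kubelet restart and failure interleaved here at 23:33:49 is the same pre-bootstrap loop as at 23:33:39.)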
Jul 10 23:33:49.470785 containerd[1506]: time="2025-07-10T23:33:49.470661739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:49.471421 containerd[1506]: time="2025-07-10T23:33:49.471364703Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 10 23:33:49.472378 containerd[1506]: time="2025-07-10T23:33:49.472350439Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:49.474886 containerd[1506]: time="2025-07-10T23:33:49.474847452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:49.475945 containerd[1506]: time="2025-07-10T23:33:49.475909122Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.666477046s" Jul 10 23:33:49.475979 containerd[1506]: time="2025-07-10T23:33:49.475945580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 10 23:33:49.476735 containerd[1506]: time="2025-07-10T23:33:49.476702466Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 10 23:33:50.909859 containerd[1506]: time="2025-07-10T23:33:50.909795350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:50.910913 containerd[1506]: time="2025-07-10T23:33:50.910409164Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 10 23:33:50.911006 containerd[1506]: time="2025-07-10T23:33:50.910963065Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:50.913530 containerd[1506]: time="2025-07-10T23:33:50.913493477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:50.914651 containerd[1506]: time="2025-07-10T23:33:50.914501154Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.437765574s" Jul 10 23:33:50.914651 containerd[1506]: time="2025-07-10T23:33:50.914536097Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 10 23:33:50.914952 
containerd[1506]: time="2025-07-10T23:33:50.914933458Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 23:33:51.965341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131256028.mount: Deactivated successfully. Jul 10 23:33:52.190342 containerd[1506]: time="2025-07-10T23:33:52.190287449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:52.191331 containerd[1506]: time="2025-07-10T23:33:52.191299030Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 10 23:33:52.192254 containerd[1506]: time="2025-07-10T23:33:52.192194826Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:52.194787 containerd[1506]: time="2025-07-10T23:33:52.194740767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:52.195230 containerd[1506]: time="2025-07-10T23:33:52.195201416Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.280240544s" Jul 10 23:33:52.195287 containerd[1506]: time="2025-07-10T23:33:52.195230412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 10 23:33:52.196178 containerd[1506]: time="2025-07-10T23:33:52.195973355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 10 23:33:52.861235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611896577.mount: Deactivated successfully. 
Jul 10 23:33:53.951374 containerd[1506]: time="2025-07-10T23:33:53.950982190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:53.951783 containerd[1506]: time="2025-07-10T23:33:53.951383561Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 10 23:33:53.952303 containerd[1506]: time="2025-07-10T23:33:53.952244025Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:53.954629 containerd[1506]: time="2025-07-10T23:33:53.954579152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:53.955662 containerd[1506]: time="2025-07-10T23:33:53.955628439Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.759623241s" Jul 10 23:33:53.955710 containerd[1506]: time="2025-07-10T23:33:53.955667175Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 10 23:33:53.956102 containerd[1506]: time="2025-07-10T23:33:53.956054175Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 23:33:54.372325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350030919.mount: Deactivated successfully. 
Jul 10 23:33:54.378552 containerd[1506]: time="2025-07-10T23:33:54.378498207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:33:54.379459 containerd[1506]: time="2025-07-10T23:33:54.379423614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 10 23:33:54.381385 containerd[1506]: time="2025-07-10T23:33:54.381349338Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:33:54.384353 containerd[1506]: time="2025-07-10T23:33:54.384315223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:33:54.384790 containerd[1506]: time="2025-07-10T23:33:54.384748596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 428.669857ms" Jul 10 23:33:54.384790 containerd[1506]: time="2025-07-10T23:33:54.384787997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 23:33:54.385618 containerd[1506]: time="2025-07-10T23:33:54.385593475Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 10 23:33:54.923439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714257779.mount: Deactivated successfully. 
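Unlike the other images, pause:3.10 is recorded with an extra io.cri-containerd.pinned label: the sandbox image has to survive image garbage collection, since every pod sandbox on the node runs it. This is also the mechanism behind the kubelet notice further down that --pod-infra-container-image is deprecated because the "Image garbage collector will get sandbox image information from CRI". A hedged way to inspect the image from the host, assuming a stock crictl (whether a pinned field is shown depends on the crictl and CRI versions in play):

    crictl inspecti registry.k8s.io/pause:3.10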
Jul 10 23:33:57.215956 containerd[1506]: time="2025-07-10T23:33:57.215908986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:57.217173 containerd[1506]: time="2025-07-10T23:33:57.216413025Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 10 23:33:57.217698 containerd[1506]: time="2025-07-10T23:33:57.217670139Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:57.221654 containerd[1506]: time="2025-07-10T23:33:57.221132562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:33:57.222298 containerd[1506]: time="2025-07-10T23:33:57.222264177Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.836643506s" Jul 10 23:33:57.222349 containerd[1506]: time="2025-07-10T23:33:57.222298007Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 10 23:33:59.455047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 23:33:59.457225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:33:59.596007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:33:59.600788 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:33:59.643264 kubelet[2184]: E0710 23:33:59.643209 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:33:59.646091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:33:59.646366 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:33:59.646957 systemd[1]: kubelet.service: Consumed 139ms CPU time, 106.9M memory peak. Jul 10 23:34:02.300123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:34:02.300553 systemd[1]: kubelet.service: Consumed 139ms CPU time, 106.9M memory peak. Jul 10 23:34:02.302750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:34:02.324296 systemd[1]: Reload requested from client PID 2199 ('systemctl') (unit session-7.scope)... Jul 10 23:34:02.324310 systemd[1]: Reloading... Jul 10 23:34:02.400698 zram_generator::config[2245]: No configuration found. Jul 10 23:34:02.495246 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:34:02.581425 systemd[1]: Reloading finished in 256 ms. 
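The daemon-reload requested from session-7.scope (the shell where install.sh was invoked at 23:33:44) surfaces one cosmetic warning: docker.socket still declares ListenStream=/var/run/docker.sock, and systemd rewrites it to /run/docker.sock on the fly. Because the unit ships read-only under /usr/lib, the clean fix would be a drop-in rather than an edit of the unit itself; a hypothetical sketch:

    # /etc/systemd/system/docker.socket.d/10-runpath.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

The empty ListenStream= clears the inherited value before setting the new one, the usual pattern for systemd list-type options.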
Jul 10 23:34:02.629233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:34:02.631892 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:34:02.633273 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 23:34:02.634686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:34:02.634729 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.2M memory peak. Jul 10 23:34:02.636174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:34:02.768682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:34:02.772978 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:34:02.809785 kubelet[2289]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:34:02.809785 kubelet[2289]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 23:34:02.809785 kubelet[2289]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:34:02.810105 kubelet[2289]: I0710 23:34:02.809818 2289 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:34:03.143326 kubelet[2289]: I0710 23:34:03.143284 2289 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 23:34:03.143326 kubelet[2289]: I0710 23:34:03.143315 2289 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:34:03.143577 kubelet[2289]: I0710 23:34:03.143550 2289 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 23:34:03.202212 kubelet[2289]: E0710 23:34:03.202165 2289 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 23:34:03.205756 kubelet[2289]: I0710 23:34:03.205722 2289 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:34:03.213900 kubelet[2289]: I0710 23:34:03.213870 2289 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 23:34:03.217761 kubelet[2289]: I0710 23:34:03.217712 2289 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 23:34:03.218850 kubelet[2289]: I0710 23:34:03.218796 2289 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:34:03.219013 kubelet[2289]: I0710 23:34:03.218843 2289 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:34:03.219151 kubelet[2289]: I0710 23:34:03.219066 2289 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 23:34:03.219151 kubelet[2289]: I0710 23:34:03.219075 2289 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 23:34:03.219807 kubelet[2289]: I0710 23:34:03.219782 2289 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:34:03.222276 kubelet[2289]: I0710 23:34:03.222228 2289 kubelet.go:480] "Attempting to sync node with API server" Jul 10 23:34:03.222276 kubelet[2289]: I0710 23:34:03.222252 2289 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:34:03.222276 kubelet[2289]: I0710 23:34:03.222283 2289 kubelet.go:386] "Adding apiserver pod source" Jul 10 23:34:03.223649 kubelet[2289]: I0710 23:34:03.223260 2289 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:34:03.224439 kubelet[2289]: I0710 23:34:03.224413 2289 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 23:34:03.225235 kubelet[2289]: I0710 23:34:03.225195 2289 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 23:34:03.225374 kubelet[2289]: W0710 23:34:03.225359 2289 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 10 23:34:03.225481 kubelet[2289]: E0710 23:34:03.225447 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 23:34:03.226657 kubelet[2289]: E0710 23:34:03.226588 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 23:34:03.228000 kubelet[2289]: I0710 23:34:03.227978 2289 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 23:34:03.228072 kubelet[2289]: I0710 23:34:03.228027 2289 server.go:1289] "Started kubelet" Jul 10 23:34:03.228112 kubelet[2289]: I0710 23:34:03.228080 2289 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:34:03.230518 kubelet[2289]: I0710 23:34:03.230492 2289 server.go:317] "Adding debug handlers to kubelet server" Jul 10 23:34:03.234507 kubelet[2289]: I0710 23:34:03.233404 2289 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:34:03.234507 kubelet[2289]: I0710 23:34:03.233769 2289 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:34:03.236962 kubelet[2289]: I0710 23:34:03.235348 2289 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:34:03.236962 kubelet[2289]: E0710 23:34:03.235388 2289 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185107ecb3705b8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 23:34:03.227995022 +0000 UTC m=+0.451414721,LastTimestamp:2025-07-10 23:34:03.227995022 +0000 UTC m=+0.451414721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 23:34:03.237089 kubelet[2289]: I0710 23:34:03.236977 2289 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:34:03.237927 kubelet[2289]: I0710 23:34:03.237893 2289 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 23:34:03.238628 kubelet[2289]: E0710 23:34:03.238593 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:03.240529 kubelet[2289]: I0710 23:34:03.240457 2289 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 23:34:03.240881 kubelet[2289]: E0710 23:34:03.240793 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: 
connection refused" interval="200ms" Jul 10 23:34:03.241132 kubelet[2289]: I0710 23:34:03.241107 2289 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:34:03.241273 kubelet[2289]: E0710 23:34:03.241244 2289 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:34:03.241396 kubelet[2289]: E0710 23:34:03.241370 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 23:34:03.241436 kubelet[2289]: I0710 23:34:03.241165 2289 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:34:03.242327 kubelet[2289]: I0710 23:34:03.242308 2289 factory.go:223] Registration of the containerd container factory successfully Jul 10 23:34:03.242327 kubelet[2289]: I0710 23:34:03.242325 2289 factory.go:223] Registration of the systemd container factory successfully Jul 10 23:34:03.252549 kubelet[2289]: I0710 23:34:03.252399 2289 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 23:34:03.252549 kubelet[2289]: I0710 23:34:03.252418 2289 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 23:34:03.252549 kubelet[2289]: I0710 23:34:03.252436 2289 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:34:03.259260 kubelet[2289]: I0710 23:34:03.259197 2289 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 23:34:03.260720 kubelet[2289]: I0710 23:34:03.260673 2289 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 23:34:03.260720 kubelet[2289]: I0710 23:34:03.260707 2289 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 23:34:03.260838 kubelet[2289]: I0710 23:34:03.260728 2289 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 23:34:03.260838 kubelet[2289]: I0710 23:34:03.260735 2289 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 23:34:03.260838 kubelet[2289]: E0710 23:34:03.260789 2289 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:34:03.261917 kubelet[2289]: E0710 23:34:03.261886 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 23:34:03.338761 kubelet[2289]: E0710 23:34:03.338717 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:03.361165 kubelet[2289]: E0710 23:34:03.361134 2289 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 23:34:03.362323 kubelet[2289]: I0710 23:34:03.362302 2289 policy_none.go:49] "None policy: Start" Jul 10 23:34:03.362377 kubelet[2289]: I0710 23:34:03.362331 2289 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 23:34:03.362377 kubelet[2289]: I0710 23:34:03.362344 2289 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:34:03.368114 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 23:34:03.381213 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 23:34:03.384771 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 23:34:03.404654 kubelet[2289]: E0710 23:34:03.404513 2289 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 23:34:03.405150 kubelet[2289]: I0710 23:34:03.404945 2289 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:34:03.405150 kubelet[2289]: I0710 23:34:03.404963 2289 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:34:03.405243 kubelet[2289]: I0710 23:34:03.405191 2289 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:34:03.407363 kubelet[2289]: E0710 23:34:03.407336 2289 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 23:34:03.407781 kubelet[2289]: E0710 23:34:03.407762 2289 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 23:34:03.441411 kubelet[2289]: E0710 23:34:03.441376 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Jul 10 23:34:03.508751 kubelet[2289]: I0710 23:34:03.506813 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 23:34:03.509257 kubelet[2289]: E0710 23:34:03.509206 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Jul 10 23:34:03.570992 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 10 23:34:03.601115 kubelet[2289]: E0710 23:34:03.601066 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:34:03.604343 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 10 23:34:03.606551 kubelet[2289]: E0710 23:34:03.606506 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:34:03.608839 systemd[1]: Created slice kubepods-burstable-podd15a20ebd02ecc6781380579adac5477.slice - libcontainer container kubepods-burstable-podd15a20ebd02ecc6781380579adac5477.slice. 
Jul 10 23:34:03.610333 kubelet[2289]: E0710 23:34:03.610291 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:34:03.711002 kubelet[2289]: I0710 23:34:03.710973 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 23:34:03.711297 kubelet[2289]: E0710 23:34:03.711268 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Jul 10 23:34:03.742956 kubelet[2289]: I0710 23:34:03.742923 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:03.743037 kubelet[2289]: I0710 23:34:03.742961 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:03.743037 kubelet[2289]: I0710 23:34:03.742980 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 23:34:03.743082 kubelet[2289]: I0710 23:34:03.743038 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d15a20ebd02ecc6781380579adac5477-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d15a20ebd02ecc6781380579adac5477\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:03.743082 kubelet[2289]: I0710 23:34:03.743070 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:03.743121 kubelet[2289]: I0710 23:34:03.743088 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:03.743121 kubelet[2289]: I0710 23:34:03.743106 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:03.743157 kubelet[2289]: I0710 23:34:03.743120 2289 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d15a20ebd02ecc6781380579adac5477-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d15a20ebd02ecc6781380579adac5477\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:03.743157 kubelet[2289]: I0710 23:34:03.743141 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d15a20ebd02ecc6781380579adac5477-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d15a20ebd02ecc6781380579adac5477\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:03.842731 kubelet[2289]: E0710 23:34:03.842685 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Jul 10 23:34:03.902157 kubelet[2289]: E0710 23:34:03.902116 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:03.902826 containerd[1506]: time="2025-07-10T23:34:03.902781160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 10 23:34:03.907436 kubelet[2289]: E0710 23:34:03.907404 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:03.907902 containerd[1506]: time="2025-07-10T23:34:03.907857305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 10 23:34:03.911414 kubelet[2289]: E0710 23:34:03.911349 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:03.912249 containerd[1506]: time="2025-07-10T23:34:03.912137833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d15a20ebd02ecc6781380579adac5477,Namespace:kube-system,Attempt:0,}" Jul 10 23:34:03.948827 containerd[1506]: time="2025-07-10T23:34:03.948760981Z" level=info msg="connecting to shim e6147ee075d882b777b3ab7e3f019ecfca3bc071b27e4b75031aa1874556dd4b" address="unix:///run/containerd/s/83e6ad1b0e9cffd66589ae06bb53e07df9c9623342a0074947b878e0fbcfe843" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:03.949979 containerd[1506]: time="2025-07-10T23:34:03.949924780Z" level=info msg="connecting to shim 7904b36fca3c5893ae8b2b0fca3af47c1da2d3832bfcee35ad80270582c3f936" address="unix:///run/containerd/s/3bb636954ca608c0fcacbb554dd1f67d186cf6ee29c8ca6e1c37d3a9db93733f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:03.968300 containerd[1506]: time="2025-07-10T23:34:03.968178460Z" level=info msg="connecting to shim 76b3eb7b90ba91f65466a5a32a489b85c385c946aa036a99c137839fd6471b22" address="unix:///run/containerd/s/03265c649a77bd9a7ae1bae8816035099096fc2c4c9ed6100d83a9ad3aad4b1b" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:03.983880 systemd[1]: Started cri-containerd-7904b36fca3c5893ae8b2b0fca3af47c1da2d3832bfcee35ad80270582c3f936.scope - libcontainer container 
7904b36fca3c5893ae8b2b0fca3af47c1da2d3832bfcee35ad80270582c3f936. Jul 10 23:34:03.985912 systemd[1]: Started cri-containerd-e6147ee075d882b777b3ab7e3f019ecfca3bc071b27e4b75031aa1874556dd4b.scope - libcontainer container e6147ee075d882b777b3ab7e3f019ecfca3bc071b27e4b75031aa1874556dd4b. Jul 10 23:34:04.011863 systemd[1]: Started cri-containerd-76b3eb7b90ba91f65466a5a32a489b85c385c946aa036a99c137839fd6471b22.scope - libcontainer container 76b3eb7b90ba91f65466a5a32a489b85c385c946aa036a99c137839fd6471b22. Jul 10 23:34:04.047672 containerd[1506]: time="2025-07-10T23:34:04.047585427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6147ee075d882b777b3ab7e3f019ecfca3bc071b27e4b75031aa1874556dd4b\"" Jul 10 23:34:04.048827 kubelet[2289]: E0710 23:34:04.048801 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:04.060660 containerd[1506]: time="2025-07-10T23:34:04.058734230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"7904b36fca3c5893ae8b2b0fca3af47c1da2d3832bfcee35ad80270582c3f936\"" Jul 10 23:34:04.060660 containerd[1506]: time="2025-07-10T23:34:04.058875865Z" level=info msg="CreateContainer within sandbox \"e6147ee075d882b777b3ab7e3f019ecfca3bc071b27e4b75031aa1874556dd4b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 23:34:04.061595 kubelet[2289]: E0710 23:34:04.061566 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:04.066463 containerd[1506]: time="2025-07-10T23:34:04.066424989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d15a20ebd02ecc6781380579adac5477,Namespace:kube-system,Attempt:0,} returns sandbox id \"76b3eb7b90ba91f65466a5a32a489b85c385c946aa036a99c137839fd6471b22\"" Jul 10 23:34:04.067232 kubelet[2289]: E0710 23:34:04.067208 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:04.077292 containerd[1506]: time="2025-07-10T23:34:04.077087198Z" level=info msg="CreateContainer within sandbox \"7904b36fca3c5893ae8b2b0fca3af47c1da2d3832bfcee35ad80270582c3f936\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 23:34:04.080499 containerd[1506]: time="2025-07-10T23:34:04.079871376Z" level=info msg="CreateContainer within sandbox \"76b3eb7b90ba91f65466a5a32a489b85c385c946aa036a99c137839fd6471b22\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 23:34:04.089718 containerd[1506]: time="2025-07-10T23:34:04.089673007Z" level=info msg="Container 03138374d42755d4ef5b429996dbf77fc00f2f887addcd50b3339e4c827bb436: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:04.091808 containerd[1506]: time="2025-07-10T23:34:04.091752935Z" level=info msg="Container 98bcafba9ea2c3cbdf3b91ea10c315b4fd45d391f5a7aa8446d75521028fa5ac: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:04.098385 containerd[1506]: time="2025-07-10T23:34:04.098329830Z" level=info msg="Container 
3035196947d3efe2dd1f80c2ee4ea2cd5fa88161668923dd03154601cb6c7ff9: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:04.108422 containerd[1506]: time="2025-07-10T23:34:04.108256162Z" level=info msg="CreateContainer within sandbox \"7904b36fca3c5893ae8b2b0fca3af47c1da2d3832bfcee35ad80270582c3f936\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"98bcafba9ea2c3cbdf3b91ea10c315b4fd45d391f5a7aa8446d75521028fa5ac\"" Jul 10 23:34:04.109061 containerd[1506]: time="2025-07-10T23:34:04.109039837Z" level=info msg="StartContainer for \"98bcafba9ea2c3cbdf3b91ea10c315b4fd45d391f5a7aa8446d75521028fa5ac\"" Jul 10 23:34:04.110197 containerd[1506]: time="2025-07-10T23:34:04.110149377Z" level=info msg="connecting to shim 98bcafba9ea2c3cbdf3b91ea10c315b4fd45d391f5a7aa8446d75521028fa5ac" address="unix:///run/containerd/s/3bb636954ca608c0fcacbb554dd1f67d186cf6ee29c8ca6e1c37d3a9db93733f" protocol=ttrpc version=3 Jul 10 23:34:04.110921 containerd[1506]: time="2025-07-10T23:34:04.110881531Z" level=info msg="CreateContainer within sandbox \"e6147ee075d882b777b3ab7e3f019ecfca3bc071b27e4b75031aa1874556dd4b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"03138374d42755d4ef5b429996dbf77fc00f2f887addcd50b3339e4c827bb436\"" Jul 10 23:34:04.111376 containerd[1506]: time="2025-07-10T23:34:04.111352633Z" level=info msg="StartContainer for \"03138374d42755d4ef5b429996dbf77fc00f2f887addcd50b3339e4c827bb436\"" Jul 10 23:34:04.112343 kubelet[2289]: I0710 23:34:04.112274 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 23:34:04.112652 kubelet[2289]: E0710 23:34:04.112604 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Jul 10 23:34:04.112781 containerd[1506]: time="2025-07-10T23:34:04.112723826Z" level=info msg="connecting to shim 03138374d42755d4ef5b429996dbf77fc00f2f887addcd50b3339e4c827bb436" address="unix:///run/containerd/s/83e6ad1b0e9cffd66589ae06bb53e07df9c9623342a0074947b878e0fbcfe843" protocol=ttrpc version=3 Jul 10 23:34:04.114878 containerd[1506]: time="2025-07-10T23:34:04.114816523Z" level=info msg="CreateContainer within sandbox \"76b3eb7b90ba91f65466a5a32a489b85c385c946aa036a99c137839fd6471b22\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3035196947d3efe2dd1f80c2ee4ea2cd5fa88161668923dd03154601cb6c7ff9\"" Jul 10 23:34:04.115473 containerd[1506]: time="2025-07-10T23:34:04.115449757Z" level=info msg="StartContainer for \"3035196947d3efe2dd1f80c2ee4ea2cd5fa88161668923dd03154601cb6c7ff9\"" Jul 10 23:34:04.116714 containerd[1506]: time="2025-07-10T23:34:04.116676272Z" level=info msg="connecting to shim 3035196947d3efe2dd1f80c2ee4ea2cd5fa88161668923dd03154601cb6c7ff9" address="unix:///run/containerd/s/03265c649a77bd9a7ae1bae8816035099096fc2c4c9ed6100d83a9ad3aad4b1b" protocol=ttrpc version=3 Jul 10 23:34:04.128655 kubelet[2289]: E0710 23:34:04.128433 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 23:34:04.133847 systemd[1]: Started cri-containerd-03138374d42755d4ef5b429996dbf77fc00f2f887addcd50b3339e4c827bb436.scope - libcontainer container 
03138374d42755d4ef5b429996dbf77fc00f2f887addcd50b3339e4c827bb436. Jul 10 23:34:04.135109 systemd[1]: Started cri-containerd-98bcafba9ea2c3cbdf3b91ea10c315b4fd45d391f5a7aa8446d75521028fa5ac.scope - libcontainer container 98bcafba9ea2c3cbdf3b91ea10c315b4fd45d391f5a7aa8446d75521028fa5ac. Jul 10 23:34:04.138262 systemd[1]: Started cri-containerd-3035196947d3efe2dd1f80c2ee4ea2cd5fa88161668923dd03154601cb6c7ff9.scope - libcontainer container 3035196947d3efe2dd1f80c2ee4ea2cd5fa88161668923dd03154601cb6c7ff9. Jul 10 23:34:04.211317 containerd[1506]: time="2025-07-10T23:34:04.211281614Z" level=info msg="StartContainer for \"98bcafba9ea2c3cbdf3b91ea10c315b4fd45d391f5a7aa8446d75521028fa5ac\" returns successfully" Jul 10 23:34:04.214114 containerd[1506]: time="2025-07-10T23:34:04.214034967Z" level=info msg="StartContainer for \"3035196947d3efe2dd1f80c2ee4ea2cd5fa88161668923dd03154601cb6c7ff9\" returns successfully" Jul 10 23:34:04.217647 containerd[1506]: time="2025-07-10T23:34:04.217541292Z" level=info msg="StartContainer for \"03138374d42755d4ef5b429996dbf77fc00f2f887addcd50b3339e4c827bb436\" returns successfully" Jul 10 23:34:04.267444 kubelet[2289]: E0710 23:34:04.267189 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:34:04.267917 kubelet[2289]: E0710 23:34:04.267888 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:04.270163 kubelet[2289]: E0710 23:34:04.270135 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:34:04.270709 kubelet[2289]: E0710 23:34:04.270405 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:04.273130 kubelet[2289]: E0710 23:34:04.273083 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:34:04.273327 kubelet[2289]: E0710 23:34:04.273307 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:04.311069 kubelet[2289]: E0710 23:34:04.311019 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 23:34:04.365004 kubelet[2289]: E0710 23:34:04.364963 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 23:34:04.915207 kubelet[2289]: I0710 23:34:04.914409 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 23:34:05.277719 kubelet[2289]: E0710 23:34:05.277572 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Jul 10 23:34:05.277909 kubelet[2289]: E0710 23:34:05.277876 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:34:05.278031 kubelet[2289]: E0710 23:34:05.277971 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:05.278093 kubelet[2289]: E0710 23:34:05.278064 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:05.991924 kubelet[2289]: E0710 23:34:05.991785 2289 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 23:34:06.033649 kubelet[2289]: I0710 23:34:06.033589 2289 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 23:34:06.033955 kubelet[2289]: E0710 23:34:06.033814 2289 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 23:34:06.043479 kubelet[2289]: E0710 23:34:06.043403 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:06.143918 kubelet[2289]: E0710 23:34:06.143873 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:06.244751 kubelet[2289]: E0710 23:34:06.244427 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:06.344832 kubelet[2289]: E0710 23:34:06.344789 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:06.445689 kubelet[2289]: E0710 23:34:06.445646 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:06.546547 kubelet[2289]: E0710 23:34:06.546431 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:06.647085 kubelet[2289]: E0710 23:34:06.647039 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:06.747196 kubelet[2289]: E0710 23:34:06.747142 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:06.847908 kubelet[2289]: E0710 23:34:06.847788 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:06.948863 kubelet[2289]: E0710 23:34:06.948709 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:07.040341 kubelet[2289]: I0710 23:34:07.040256 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:07.053044 kubelet[2289]: I0710 23:34:07.053000 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 23:34:07.058197 kubelet[2289]: I0710 23:34:07.058154 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:07.229657 
kubelet[2289]: I0710 23:34:07.229500 2289 apiserver.go:52] "Watching apiserver" Jul 10 23:34:07.233855 kubelet[2289]: E0710 23:34:07.233811 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:07.235149 kubelet[2289]: E0710 23:34:07.235113 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:07.235416 kubelet[2289]: E0710 23:34:07.235399 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:07.241298 kubelet[2289]: I0710 23:34:07.241144 2289 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 23:34:08.252749 systemd[1]: Reload requested from client PID 2574 ('systemctl') (unit session-7.scope)... Jul 10 23:34:08.252772 systemd[1]: Reloading... Jul 10 23:34:08.356023 zram_generator::config[2620]: No configuration found. Jul 10 23:34:08.367616 kubelet[2289]: E0710 23:34:08.367576 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:08.474230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:34:08.574085 systemd[1]: Reloading finished in 320 ms. Jul 10 23:34:08.600922 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:34:08.614250 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 23:34:08.614512 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:34:08.614573 systemd[1]: kubelet.service: Consumed 876ms CPU time, 131.6M memory peak. Jul 10 23:34:08.617401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:34:08.776236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:34:08.787072 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:34:08.822291 kubelet[2659]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:34:08.822291 kubelet[2659]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 23:34:08.822291 kubelet[2659]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
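From this point the log interleaves two kubelet generations: the pre-reload process kubelet[2289] and the restarted kubelet[2659]. Each kubelet line carries a klog-style header, <level><MMDD> <time> <pid> <file:line>], so the PID and source location are machine-parsable. A small illustrative parser (not part of the system; the header shape is assumed from the lines above):

import re

# Header shape assumed from the log: Lmmdd HH:MM:SS.ffffff pid file:line] msg
KLOG = re.compile(
    r'(?P<level>[IWEF])(?P<mmdd>\d{4})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+'
    r'(?P<pid>\d+)\s+'
    r'(?P<source>[\w./-]+:\d+)\]\s*'
    r'(?P<msg>.*)')

sample = 'I0710 23:34:08.864971 2659 server.go:1289] "Started kubelet"'
m = KLOG.match(sample)
print(m.group('level'), m.group('pid'), m.group('source'))
# -> I 2659 server.go:1289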
Jul 10 23:34:08.824781 kubelet[2659]: I0710 23:34:08.822327 2659 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:34:08.836691 kubelet[2659]: I0710 23:34:08.835696 2659 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 23:34:08.836691 kubelet[2659]: I0710 23:34:08.835732 2659 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:34:08.836691 kubelet[2659]: I0710 23:34:08.836006 2659 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 23:34:08.838384 kubelet[2659]: I0710 23:34:08.838347 2659 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 10 23:34:08.841541 kubelet[2659]: I0710 23:34:08.841472 2659 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:34:08.851668 kubelet[2659]: I0710 23:34:08.851613 2659 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 23:34:08.854300 kubelet[2659]: I0710 23:34:08.854269 2659 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 23:34:08.854499 kubelet[2659]: I0710 23:34:08.854470 2659 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:34:08.854701 kubelet[2659]: I0710 23:34:08.854502 2659 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:34:08.854801 kubelet[2659]: I0710 23:34:08.854713 2659 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 23:34:08.854801 kubelet[2659]: I0710 23:34:08.854722 2659 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 23:34:08.854801 kubelet[2659]: I0710 23:34:08.854762 2659 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:34:08.854916 kubelet[2659]: I0710 
23:34:08.854903 2659 kubelet.go:480] "Attempting to sync node with API server" Jul 10 23:34:08.854943 kubelet[2659]: I0710 23:34:08.854919 2659 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:34:08.854964 kubelet[2659]: I0710 23:34:08.854947 2659 kubelet.go:386] "Adding apiserver pod source" Jul 10 23:34:08.854964 kubelet[2659]: I0710 23:34:08.854963 2659 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:34:08.858586 kubelet[2659]: I0710 23:34:08.858550 2659 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 23:34:08.859351 kubelet[2659]: I0710 23:34:08.859316 2659 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 23:34:08.864954 kubelet[2659]: I0710 23:34:08.864918 2659 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 23:34:08.865080 kubelet[2659]: I0710 23:34:08.864971 2659 server.go:1289] "Started kubelet" Jul 10 23:34:08.867823 kubelet[2659]: I0710 23:34:08.867790 2659 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:34:08.872755 kubelet[2659]: I0710 23:34:08.871904 2659 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:34:08.872893 kubelet[2659]: I0710 23:34:08.872820 2659 server.go:317] "Adding debug handlers to kubelet server" Jul 10 23:34:08.876672 kubelet[2659]: I0710 23:34:08.876257 2659 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:34:08.876672 kubelet[2659]: I0710 23:34:08.876482 2659 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:34:08.876968 kubelet[2659]: I0710 23:34:08.876919 2659 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:34:08.879375 kubelet[2659]: I0710 23:34:08.877066 2659 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 23:34:08.880943 kubelet[2659]: I0710 23:34:08.877084 2659 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 23:34:08.880943 kubelet[2659]: E0710 23:34:08.877204 2659 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:34:08.881116 kubelet[2659]: I0710 23:34:08.881094 2659 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:34:08.881420 kubelet[2659]: I0710 23:34:08.881389 2659 factory.go:223] Registration of the systemd container factory successfully Jul 10 23:34:08.885175 kubelet[2659]: E0710 23:34:08.885133 2659 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:34:08.888185 kubelet[2659]: I0710 23:34:08.881498 2659 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:34:08.889478 kubelet[2659]: I0710 23:34:08.889283 2659 factory.go:223] Registration of the containerd container factory successfully Jul 10 23:34:08.896064 kubelet[2659]: I0710 23:34:08.895267 2659 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 10 23:34:08.898365 kubelet[2659]: I0710 23:34:08.898072 2659 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 23:34:08.898365 kubelet[2659]: I0710 23:34:08.898166 2659 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 23:34:08.898365 kubelet[2659]: I0710 23:34:08.898190 2659 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 23:34:08.898365 kubelet[2659]: I0710 23:34:08.898216 2659 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 23:34:08.898365 kubelet[2659]: E0710 23:34:08.898290 2659 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:34:08.927328 kubelet[2659]: I0710 23:34:08.927302 2659 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 23:34:08.927685 kubelet[2659]: I0710 23:34:08.927466 2659 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 23:34:08.927685 kubelet[2659]: I0710 23:34:08.927493 2659 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:34:08.927794 kubelet[2659]: I0710 23:34:08.927777 2659 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 23:34:08.927864 kubelet[2659]: I0710 23:34:08.927841 2659 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 23:34:08.927914 kubelet[2659]: I0710 23:34:08.927906 2659 policy_none.go:49] "None policy: Start" Jul 10 23:34:08.927964 kubelet[2659]: I0710 23:34:08.927956 2659 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 23:34:08.928012 kubelet[2659]: I0710 23:34:08.928005 2659 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:34:08.928168 kubelet[2659]: I0710 23:34:08.928154 2659 state_mem.go:75] "Updated machine memory state" Jul 10 23:34:08.934685 kubelet[2659]: E0710 23:34:08.934620 2659 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 23:34:08.934872 kubelet[2659]: I0710 23:34:08.934850 2659 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:34:08.934919 kubelet[2659]: I0710 23:34:08.934869 2659 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:34:08.935104 kubelet[2659]: I0710 23:34:08.935082 2659 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:34:08.938696 kubelet[2659]: E0710 23:34:08.938102 2659 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 23:34:08.999667 kubelet[2659]: I0710 23:34:08.999537 2659 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:08.999667 kubelet[2659]: I0710 23:34:08.999653 2659 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 23:34:08.999840 kubelet[2659]: I0710 23:34:08.999541 2659 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:09.007242 kubelet[2659]: E0710 23:34:09.007190 2659 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 23:34:09.007399 kubelet[2659]: E0710 23:34:09.007287 2659 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:09.010652 kubelet[2659]: E0710 23:34:09.010591 2659 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:09.039338 kubelet[2659]: I0710 23:34:09.039292 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 23:34:09.053981 kubelet[2659]: I0710 23:34:09.053943 2659 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 23:34:09.054150 kubelet[2659]: I0710 23:34:09.054044 2659 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 23:34:09.085464 kubelet[2659]: I0710 23:34:09.085278 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:09.085464 kubelet[2659]: I0710 23:34:09.085323 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:09.085464 kubelet[2659]: I0710 23:34:09.085344 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d15a20ebd02ecc6781380579adac5477-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d15a20ebd02ecc6781380579adac5477\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:09.085464 kubelet[2659]: I0710 23:34:09.085370 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d15a20ebd02ecc6781380579adac5477-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d15a20ebd02ecc6781380579adac5477\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:09.085464 kubelet[2659]: I0710 23:34:09.085388 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d15a20ebd02ecc6781380579adac5477-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"d15a20ebd02ecc6781380579adac5477\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:09.085797 kubelet[2659]: I0710 23:34:09.085404 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:09.085797 kubelet[2659]: I0710 23:34:09.085419 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:09.085797 kubelet[2659]: I0710 23:34:09.085434 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:34:09.086656 kubelet[2659]: I0710 23:34:09.086619 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 23:34:09.308408 kubelet[2659]: E0710 23:34:09.308353 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:09.308549 kubelet[2659]: E0710 23:34:09.308439 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:09.311653 kubelet[2659]: E0710 23:34:09.311608 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:09.855982 kubelet[2659]: I0710 23:34:09.855926 2659 apiserver.go:52] "Watching apiserver" Jul 10 23:34:09.882060 kubelet[2659]: I0710 23:34:09.882015 2659 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 23:34:09.912922 kubelet[2659]: I0710 23:34:09.912888 2659 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:09.914473 kubelet[2659]: E0710 23:34:09.913084 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:09.914473 kubelet[2659]: E0710 23:34:09.912894 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:10.036096 kubelet[2659]: E0710 23:34:10.035229 2659 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 23:34:10.036244 kubelet[2659]: I0710 23:34:10.035266 
2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.03523503 podStartE2EDuration="3.03523503s" podCreationTimestamp="2025-07-10 23:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:34:10.034315745 +0000 UTC m=+1.243765250" watchObservedRunningTime="2025-07-10 23:34:10.03523503 +0000 UTC m=+1.244684495" Jul 10 23:34:10.036651 kubelet[2659]: E0710 23:34:10.036610 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:10.092967 kubelet[2659]: I0710 23:34:10.092883 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.092864572 podStartE2EDuration="3.092864572s" podCreationTimestamp="2025-07-10 23:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:34:10.082046382 +0000 UTC m=+1.291495887" watchObservedRunningTime="2025-07-10 23:34:10.092864572 +0000 UTC m=+1.302314157" Jul 10 23:34:10.093138 kubelet[2659]: I0710 23:34:10.092990 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.09298435 podStartE2EDuration="3.09298435s" podCreationTimestamp="2025-07-10 23:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:34:10.092577113 +0000 UTC m=+1.302026698" watchObservedRunningTime="2025-07-10 23:34:10.09298435 +0000 UTC m=+1.302433855" Jul 10 23:34:10.914278 kubelet[2659]: E0710 23:34:10.914236 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:10.914278 kubelet[2659]: E0710 23:34:10.914272 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:10.914654 kubelet[2659]: E0710 23:34:10.914432 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:12.232432 kubelet[2659]: E0710 23:34:12.231898 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:13.883530 kubelet[2659]: I0710 23:34:13.883442 2659 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 23:34:13.884337 containerd[1506]: time="2025-07-10T23:34:13.884234222Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 23:34:13.885154 kubelet[2659]: I0710 23:34:13.884478 2659 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 23:34:15.040927 systemd[1]: Created slice kubepods-besteffort-pod70364745_2f7b_441a_b5d1_d493efef5260.slice - libcontainer container kubepods-besteffort-pod70364745_2f7b_441a_b5d1_d493efef5260.slice. 
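The podStartSLOduration values reported above are simply the observed running time minus the pod creation timestamp. Reproducing the kube-apiserver-localhost figure from the logged values (truncated to Python datetime's microsecond resolution, since the log carries nanoseconds):

from datetime import datetime, timezone

# Values copied from the pod_startup_latency_tracker entry above.
created = datetime(2025, 7, 10, 23, 34, 7, 0, tzinfo=timezone.utc)
running = datetime(2025, 7, 10, 23, 34, 10, 35235, tzinfo=timezone.utc)

print((running - created).total_seconds())
# -> 3.035235, matching podStartSLOduration=3.03523503s up to truncation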
Jul 10 23:34:15.132942 kubelet[2659]: I0710 23:34:15.132891 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70364745-2f7b-441a-b5d1-d493efef5260-kube-proxy\") pod \"kube-proxy-jxc97\" (UID: \"70364745-2f7b-441a-b5d1-d493efef5260\") " pod="kube-system/kube-proxy-jxc97" Jul 10 23:34:15.133662 kubelet[2659]: I0710 23:34:15.133316 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70364745-2f7b-441a-b5d1-d493efef5260-xtables-lock\") pod \"kube-proxy-jxc97\" (UID: \"70364745-2f7b-441a-b5d1-d493efef5260\") " pod="kube-system/kube-proxy-jxc97" Jul 10 23:34:15.133662 kubelet[2659]: I0710 23:34:15.133353 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70364745-2f7b-441a-b5d1-d493efef5260-lib-modules\") pod \"kube-proxy-jxc97\" (UID: \"70364745-2f7b-441a-b5d1-d493efef5260\") " pod="kube-system/kube-proxy-jxc97" Jul 10 23:34:15.133662 kubelet[2659]: I0710 23:34:15.133370 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdlgj\" (UniqueName: \"kubernetes.io/projected/70364745-2f7b-441a-b5d1-d493efef5260-kube-api-access-rdlgj\") pod \"kube-proxy-jxc97\" (UID: \"70364745-2f7b-441a-b5d1-d493efef5260\") " pod="kube-system/kube-proxy-jxc97" Jul 10 23:34:15.217748 systemd[1]: Created slice kubepods-besteffort-pod7a8eab7b_cdce_40d7_83fb_8ec068447955.slice - libcontainer container kubepods-besteffort-pod7a8eab7b_cdce_40d7_83fb_8ec068447955.slice. Jul 10 23:34:15.234532 kubelet[2659]: I0710 23:34:15.234475 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a8eab7b-cdce-40d7-83fb-8ec068447955-var-lib-calico\") pod \"tigera-operator-747864d56d-d78qt\" (UID: \"7a8eab7b-cdce-40d7-83fb-8ec068447955\") " pod="tigera-operator/tigera-operator-747864d56d-d78qt" Jul 10 23:34:15.234532 kubelet[2659]: I0710 23:34:15.234537 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jlwq\" (UniqueName: \"kubernetes.io/projected/7a8eab7b-cdce-40d7-83fb-8ec068447955-kube-api-access-8jlwq\") pod \"tigera-operator-747864d56d-d78qt\" (UID: \"7a8eab7b-cdce-40d7-83fb-8ec068447955\") " pod="tigera-operator/tigera-operator-747864d56d-d78qt" Jul 10 23:34:15.353110 kubelet[2659]: E0710 23:34:15.353025 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:15.353911 containerd[1506]: time="2025-07-10T23:34:15.353858081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxc97,Uid:70364745-2f7b-441a-b5d1-d493efef5260,Namespace:kube-system,Attempt:0,}" Jul 10 23:34:15.380834 containerd[1506]: time="2025-07-10T23:34:15.380779947Z" level=info msg="connecting to shim bcd1c480cf6f38b532221daa6a409eb9ef92041502e13cfa3bdfb18b334ac2d5" address="unix:///run/containerd/s/864fa858c6001ddfd8866b17caf8cb515798d9e3c32c4a98128ee22e0ffac46e" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:15.411867 systemd[1]: Started cri-containerd-bcd1c480cf6f38b532221daa6a409eb9ef92041502e13cfa3bdfb18b334ac2d5.scope - libcontainer container 
bcd1c480cf6f38b532221daa6a409eb9ef92041502e13cfa3bdfb18b334ac2d5. Jul 10 23:34:15.436241 containerd[1506]: time="2025-07-10T23:34:15.436203015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxc97,Uid:70364745-2f7b-441a-b5d1-d493efef5260,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcd1c480cf6f38b532221daa6a409eb9ef92041502e13cfa3bdfb18b334ac2d5\"" Jul 10 23:34:15.437352 kubelet[2659]: E0710 23:34:15.437233 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:15.446600 containerd[1506]: time="2025-07-10T23:34:15.446063214Z" level=info msg="CreateContainer within sandbox \"bcd1c480cf6f38b532221daa6a409eb9ef92041502e13cfa3bdfb18b334ac2d5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 23:34:15.465671 containerd[1506]: time="2025-07-10T23:34:15.465267183Z" level=info msg="Container c5284f375a1e00918918240a07fed2b563a6a923166cfd77dce5e96759cc82c8: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:15.481614 containerd[1506]: time="2025-07-10T23:34:15.481560610Z" level=info msg="CreateContainer within sandbox \"bcd1c480cf6f38b532221daa6a409eb9ef92041502e13cfa3bdfb18b334ac2d5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c5284f375a1e00918918240a07fed2b563a6a923166cfd77dce5e96759cc82c8\"" Jul 10 23:34:15.482552 containerd[1506]: time="2025-07-10T23:34:15.482405879Z" level=info msg="StartContainer for \"c5284f375a1e00918918240a07fed2b563a6a923166cfd77dce5e96759cc82c8\"" Jul 10 23:34:15.483803 containerd[1506]: time="2025-07-10T23:34:15.483777139Z" level=info msg="connecting to shim c5284f375a1e00918918240a07fed2b563a6a923166cfd77dce5e96759cc82c8" address="unix:///run/containerd/s/864fa858c6001ddfd8866b17caf8cb515798d9e3c32c4a98128ee22e0ffac46e" protocol=ttrpc version=3 Jul 10 23:34:15.509842 systemd[1]: Started cri-containerd-c5284f375a1e00918918240a07fed2b563a6a923166cfd77dce5e96759cc82c8.scope - libcontainer container c5284f375a1e00918918240a07fed2b563a6a923166cfd77dce5e96759cc82c8. Jul 10 23:34:15.521533 containerd[1506]: time="2025-07-10T23:34:15.521422479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-d78qt,Uid:7a8eab7b-cdce-40d7-83fb-8ec068447955,Namespace:tigera-operator,Attempt:0,}" Jul 10 23:34:15.565954 containerd[1506]: time="2025-07-10T23:34:15.565915439Z" level=info msg="StartContainer for \"c5284f375a1e00918918240a07fed2b563a6a923166cfd77dce5e96759cc82c8\" returns successfully" Jul 10 23:34:15.604016 containerd[1506]: time="2025-07-10T23:34:15.603828156Z" level=info msg="connecting to shim 942c0da3ba63599e0a2aaf9fd6dcd409e39f97762a535d7bea75404bb55d1ee0" address="unix:///run/containerd/s/ea418bb990665cc198dd27fb04e098a6faaf7d2714c6e1d111a820c483062506" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:15.625869 systemd[1]: Started cri-containerd-942c0da3ba63599e0a2aaf9fd6dcd409e39f97762a535d7bea75404bb55d1ee0.scope - libcontainer container 942c0da3ba63599e0a2aaf9fd6dcd409e39f97762a535d7bea75404bb55d1ee0. 
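Each "connecting to shim" entry names a per-sandbox shim socket under /run/containerd/s/ that containerd dials over ttrpc. A stdlib-only sketch that merely checks whether such a socket is accepting connections — the path is copied from the tigera-operator sandbox entry above, and no ttrpc framing is attempted, so this is a liveness probe rather than a client:

import socket

# Socket path copied from the tigera-operator "connecting to shim" entry.
SHIM = "/run/containerd/s/ea418bb990665cc198dd27fb04e098a6faaf7d2714c6e1d111a820c483062506"

def shim_listening(path: str) -> bool:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect(path)  # succeeds only while the shim is serving
        return True
    except OSError:
        return False
    finally:
        s.close()

print(shim_listening(SHIM))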
Jul 10 23:34:15.669253 containerd[1506]: time="2025-07-10T23:34:15.669200656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-d78qt,Uid:7a8eab7b-cdce-40d7-83fb-8ec068447955,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"942c0da3ba63599e0a2aaf9fd6dcd409e39f97762a535d7bea75404bb55d1ee0\"" Jul 10 23:34:15.671061 containerd[1506]: time="2025-07-10T23:34:15.670959938Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 23:34:15.926737 kubelet[2659]: E0710 23:34:15.926445 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:15.937531 kubelet[2659]: I0710 23:34:15.937415 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jxc97" podStartSLOduration=1.9373933829999999 podStartE2EDuration="1.937393383s" podCreationTimestamp="2025-07-10 23:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:34:15.935796 +0000 UTC m=+7.145245465" watchObservedRunningTime="2025-07-10 23:34:15.937393383 +0000 UTC m=+7.146842928" Jul 10 23:34:16.261308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467107556.mount: Deactivated successfully. Jul 10 23:34:17.070016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352697449.mount: Deactivated successfully. Jul 10 23:34:17.322809 kubelet[2659]: E0710 23:34:17.320405 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:17.628254 containerd[1506]: time="2025-07-10T23:34:17.628054522Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:17.629442 containerd[1506]: time="2025-07-10T23:34:17.629374674Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 10 23:34:17.630178 containerd[1506]: time="2025-07-10T23:34:17.630136563Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:17.637038 containerd[1506]: time="2025-07-10T23:34:17.636971040Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:17.637764 containerd[1506]: time="2025-07-10T23:34:17.637728248Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.966731137s" Jul 10 23:34:17.637808 containerd[1506]: time="2025-07-10T23:34:17.637770102Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 10 23:34:17.647211 containerd[1506]: time="2025-07-10T23:34:17.647146211Z" level=info msg="CreateContainer within sandbox 
\"942c0da3ba63599e0a2aaf9fd6dcd409e39f97762a535d7bea75404bb55d1ee0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 23:34:17.657048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109733788.mount: Deactivated successfully. Jul 10 23:34:17.658736 containerd[1506]: time="2025-07-10T23:34:17.657760085Z" level=info msg="Container 3ea066e4e20114b80153671595c08d247b88859fed1176dc94f134ca0ac5e2e4: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:17.665349 containerd[1506]: time="2025-07-10T23:34:17.665292630Z" level=info msg="CreateContainer within sandbox \"942c0da3ba63599e0a2aaf9fd6dcd409e39f97762a535d7bea75404bb55d1ee0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3ea066e4e20114b80153671595c08d247b88859fed1176dc94f134ca0ac5e2e4\"" Jul 10 23:34:17.666996 containerd[1506]: time="2025-07-10T23:34:17.666961576Z" level=info msg="StartContainer for \"3ea066e4e20114b80153671595c08d247b88859fed1176dc94f134ca0ac5e2e4\"" Jul 10 23:34:17.668539 containerd[1506]: time="2025-07-10T23:34:17.668085144Z" level=info msg="connecting to shim 3ea066e4e20114b80153671595c08d247b88859fed1176dc94f134ca0ac5e2e4" address="unix:///run/containerd/s/ea418bb990665cc198dd27fb04e098a6faaf7d2714c6e1d111a820c483062506" protocol=ttrpc version=3 Jul 10 23:34:17.706954 systemd[1]: Started cri-containerd-3ea066e4e20114b80153671595c08d247b88859fed1176dc94f134ca0ac5e2e4.scope - libcontainer container 3ea066e4e20114b80153671595c08d247b88859fed1176dc94f134ca0ac5e2e4. Jul 10 23:34:17.800013 containerd[1506]: time="2025-07-10T23:34:17.799962548Z" level=info msg="StartContainer for \"3ea066e4e20114b80153671595c08d247b88859fed1176dc94f134ca0ac5e2e4\" returns successfully" Jul 10 23:34:17.937440 kubelet[2659]: E0710 23:34:17.937294 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:17.968589 kubelet[2659]: I0710 23:34:17.968528 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-d78qt" podStartSLOduration=0.998151773 podStartE2EDuration="2.968508395s" podCreationTimestamp="2025-07-10 23:34:15 +0000 UTC" firstStartedPulling="2025-07-10 23:34:15.670493808 +0000 UTC m=+6.879943313" lastFinishedPulling="2025-07-10 23:34:17.64085047 +0000 UTC m=+8.850299935" observedRunningTime="2025-07-10 23:34:17.951747989 +0000 UTC m=+9.161197494" watchObservedRunningTime="2025-07-10 23:34:17.968508395 +0000 UTC m=+9.177957900" Jul 10 23:34:18.943842 kubelet[2659]: E0710 23:34:18.943347 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:20.918940 kubelet[2659]: E0710 23:34:20.918849 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:22.239560 kubelet[2659]: E0710 23:34:22.239502 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:22.492393 update_engine[1492]: I20250710 23:34:22.491619 1492 update_attempter.cc:509] Updating boot flags... 
Jul 10 23:34:22.944766 kubelet[2659]: E0710 23:34:22.944259 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:23.499176 sudo[1720]: pam_unix(sudo:session): session closed for user root Jul 10 23:34:23.502526 sshd[1719]: Connection closed by 10.0.0.1 port 33236 Jul 10 23:34:23.502013 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Jul 10 23:34:23.507136 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:33236.service: Deactivated successfully. Jul 10 23:34:23.510839 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 23:34:23.511225 systemd[1]: session-7.scope: Consumed 7.336s CPU time, 219.4M memory peak. Jul 10 23:34:23.514608 systemd-logind[1478]: Session 7 logged out. Waiting for processes to exit. Jul 10 23:34:23.518294 systemd-logind[1478]: Removed session 7. Jul 10 23:34:30.009901 systemd[1]: Created slice kubepods-besteffort-pod4e9bed63_65fe_407a_a135_de054df54793.slice - libcontainer container kubepods-besteffort-pod4e9bed63_65fe_407a_a135_de054df54793.slice. Jul 10 23:34:30.034276 kubelet[2659]: I0710 23:34:30.034227 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e9bed63-65fe-407a-a135-de054df54793-tigera-ca-bundle\") pod \"calico-typha-696599bbd7-wldt2\" (UID: \"4e9bed63-65fe-407a-a135-de054df54793\") " pod="calico-system/calico-typha-696599bbd7-wldt2" Jul 10 23:34:30.034276 kubelet[2659]: I0710 23:34:30.034279 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4e9bed63-65fe-407a-a135-de054df54793-typha-certs\") pod \"calico-typha-696599bbd7-wldt2\" (UID: \"4e9bed63-65fe-407a-a135-de054df54793\") " pod="calico-system/calico-typha-696599bbd7-wldt2" Jul 10 23:34:30.034694 kubelet[2659]: I0710 23:34:30.034300 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt62h\" (UniqueName: \"kubernetes.io/projected/4e9bed63-65fe-407a-a135-de054df54793-kube-api-access-rt62h\") pod \"calico-typha-696599bbd7-wldt2\" (UID: \"4e9bed63-65fe-407a-a135-de054df54793\") " pod="calico-system/calico-typha-696599bbd7-wldt2" Jul 10 23:34:30.289277 systemd[1]: Created slice kubepods-besteffort-pod2d078d36_6310_4fc3_83ea_bfa9fd76657f.slice - libcontainer container kubepods-besteffort-pod2d078d36_6310_4fc3_83ea_bfa9fd76657f.slice. 
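The "Created slice" entries above and below follow a fixed naming scheme: the pod's QoS class plus its UID with dashes mapped to underscores ("-" is systemd's slice-hierarchy separator, so it must be escaped inside the pod segment). A one-liner reconstructing the calico-typha slice name from the UID logged in the volume-attach entries:

def pod_slice(uid: str, qos: str = "besteffort") -> str:
    # "-" nests slices in systemd, so the UID's dashes become underscores.
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

# UID of calico-typha-696599bbd7-wldt2, from the volume entries above:
print(pod_slice("4e9bed63-65fe-407a-a135-de054df54793"))
# -> kubepods-besteffort-pod4e9bed63_65fe_407a_a135_de054df54793.slice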
Jul 10 23:34:30.320636 kubelet[2659]: E0710 23:34:30.320592 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:30.336072 kubelet[2659]: I0710 23:34:30.336027 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2d078d36-6310-4fc3-83ea-bfa9fd76657f-var-lib-calico\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.336147 kubelet[2659]: I0710 23:34:30.336080 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2d078d36-6310-4fc3-83ea-bfa9fd76657f-var-run-calico\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.336147 kubelet[2659]: I0710 23:34:30.336115 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d078d36-6310-4fc3-83ea-bfa9fd76657f-xtables-lock\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.336147 kubelet[2659]: I0710 23:34:30.336132 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2d078d36-6310-4fc3-83ea-bfa9fd76657f-cni-bin-dir\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.336147 kubelet[2659]: I0710 23:34:30.336148 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2d078d36-6310-4fc3-83ea-bfa9fd76657f-cni-log-dir\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.337705 kubelet[2659]: I0710 23:34:30.336163 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2d078d36-6310-4fc3-83ea-bfa9fd76657f-flexvol-driver-host\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.337813 kubelet[2659]: I0710 23:34:30.337732 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2d078d36-6310-4fc3-83ea-bfa9fd76657f-policysync\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.337813 kubelet[2659]: I0710 23:34:30.337775 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d078d36-6310-4fc3-83ea-bfa9fd76657f-lib-modules\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.337813 kubelet[2659]: I0710 23:34:30.337791 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/2d078d36-6310-4fc3-83ea-bfa9fd76657f-node-certs\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.337813 kubelet[2659]: I0710 23:34:30.337806 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2d078d36-6310-4fc3-83ea-bfa9fd76657f-cni-net-dir\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.337903 kubelet[2659]: I0710 23:34:30.337821 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d078d36-6310-4fc3-83ea-bfa9fd76657f-tigera-ca-bundle\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.337903 kubelet[2659]: I0710 23:34:30.337840 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktpzn\" (UniqueName: \"kubernetes.io/projected/2d078d36-6310-4fc3-83ea-bfa9fd76657f-kube-api-access-ktpzn\") pod \"calico-node-qnkps\" (UID: \"2d078d36-6310-4fc3-83ea-bfa9fd76657f\") " pod="calico-system/calico-node-qnkps" Jul 10 23:34:30.338570 containerd[1506]: time="2025-07-10T23:34:30.338530193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-696599bbd7-wldt2,Uid:4e9bed63-65fe-407a-a135-de054df54793,Namespace:calico-system,Attempt:0,}" Jul 10 23:34:30.463320 kubelet[2659]: E0710 23:34:30.463274 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.463320 kubelet[2659]: W0710 23:34:30.463313 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.463486 kubelet[2659]: E0710 23:34:30.463351 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.463688 kubelet[2659]: E0710 23:34:30.463672 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.463688 kubelet[2659]: W0710 23:34:30.463685 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.463740 kubelet[2659]: E0710 23:34:30.463696 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.476434 kubelet[2659]: E0710 23:34:30.476404 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.476434 kubelet[2659]: W0710 23:34:30.476426 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.476572 kubelet[2659]: E0710 23:34:30.476447 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.507067 kubelet[2659]: E0710 23:34:30.506402 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx5zr" podUID="6e5a2663-9022-49fc-bebc-a43168cdc7dc" Jul 10 23:34:30.528715 kubelet[2659]: E0710 23:34:30.528677 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.528715 kubelet[2659]: W0710 23:34:30.528705 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.528867 kubelet[2659]: E0710 23:34:30.528727 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.528960 kubelet[2659]: E0710 23:34:30.528942 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.529003 kubelet[2659]: W0710 23:34:30.528955 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.529003 kubelet[2659]: E0710 23:34:30.529002 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.529266 kubelet[2659]: E0710 23:34:30.529237 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.529266 kubelet[2659]: W0710 23:34:30.529265 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.529320 kubelet[2659]: E0710 23:34:30.529276 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.529457 kubelet[2659]: E0710 23:34:30.529441 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.529457 kubelet[2659]: W0710 23:34:30.529453 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.529502 kubelet[2659]: E0710 23:34:30.529462 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.529660 kubelet[2659]: E0710 23:34:30.529644 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.529660 kubelet[2659]: W0710 23:34:30.529657 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.529759 kubelet[2659]: E0710 23:34:30.529669 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.530200 kubelet[2659]: E0710 23:34:30.530177 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.530233 kubelet[2659]: W0710 23:34:30.530198 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.530273 kubelet[2659]: E0710 23:34:30.530233 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.530551 kubelet[2659]: E0710 23:34:30.530536 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.530551 kubelet[2659]: W0710 23:34:30.530547 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.530599 kubelet[2659]: E0710 23:34:30.530557 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.530759 kubelet[2659]: E0710 23:34:30.530743 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.530759 kubelet[2659]: W0710 23:34:30.530755 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.530819 kubelet[2659]: E0710 23:34:30.530763 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.534743 kubelet[2659]: E0710 23:34:30.534706 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.534743 kubelet[2659]: W0710 23:34:30.534730 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.534743 kubelet[2659]: E0710 23:34:30.534749 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.536190 kubelet[2659]: E0710 23:34:30.536163 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.536737 kubelet[2659]: W0710 23:34:30.536676 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.536779 kubelet[2659]: E0710 23:34:30.536749 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.538197 kubelet[2659]: E0710 23:34:30.538117 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.538197 kubelet[2659]: W0710 23:34:30.538145 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.538197 kubelet[2659]: E0710 23:34:30.538169 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.539824 kubelet[2659]: E0710 23:34:30.539721 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.539824 kubelet[2659]: W0710 23:34:30.539748 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.539824 kubelet[2659]: E0710 23:34:30.539768 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.540779 kubelet[2659]: E0710 23:34:30.540056 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.540779 kubelet[2659]: W0710 23:34:30.540068 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.540779 kubelet[2659]: E0710 23:34:30.540079 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.541346 kubelet[2659]: E0710 23:34:30.541209 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.541346 kubelet[2659]: W0710 23:34:30.541228 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.541346 kubelet[2659]: E0710 23:34:30.541242 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.541624 kubelet[2659]: E0710 23:34:30.541592 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.541624 kubelet[2659]: W0710 23:34:30.541611 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.541624 kubelet[2659]: E0710 23:34:30.541625 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.542823 kubelet[2659]: E0710 23:34:30.542788 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.542823 kubelet[2659]: W0710 23:34:30.542816 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.542896 kubelet[2659]: E0710 23:34:30.542838 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.543767 kubelet[2659]: E0710 23:34:30.543204 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.543767 kubelet[2659]: W0710 23:34:30.543221 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.543767 kubelet[2659]: E0710 23:34:30.543232 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.543767 kubelet[2659]: E0710 23:34:30.543460 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.543767 kubelet[2659]: W0710 23:34:30.543472 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.543767 kubelet[2659]: E0710 23:34:30.543481 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.543767 kubelet[2659]: E0710 23:34:30.543718 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.543767 kubelet[2659]: W0710 23:34:30.543747 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.543767 kubelet[2659]: E0710 23:34:30.543761 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.544110 kubelet[2659]: E0710 23:34:30.544084 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.544110 kubelet[2659]: W0710 23:34:30.544102 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.544181 kubelet[2659]: E0710 23:34:30.544116 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.544704 kubelet[2659]: E0710 23:34:30.544670 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.544704 kubelet[2659]: W0710 23:34:30.544692 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.544784 kubelet[2659]: E0710 23:34:30.544714 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.544991 kubelet[2659]: I0710 23:34:30.544962 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e5a2663-9022-49fc-bebc-a43168cdc7dc-kubelet-dir\") pod \"csi-node-driver-rx5zr\" (UID: \"6e5a2663-9022-49fc-bebc-a43168cdc7dc\") " pod="calico-system/csi-node-driver-rx5zr" Jul 10 23:34:30.547667 kubelet[2659]: E0710 23:34:30.547621 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.547667 kubelet[2659]: W0710 23:34:30.547659 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.547667 kubelet[2659]: E0710 23:34:30.547675 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.547821 kubelet[2659]: I0710 23:34:30.547708 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e5a2663-9022-49fc-bebc-a43168cdc7dc-socket-dir\") pod \"csi-node-driver-rx5zr\" (UID: \"6e5a2663-9022-49fc-bebc-a43168cdc7dc\") " pod="calico-system/csi-node-driver-rx5zr" Jul 10 23:34:30.548053 kubelet[2659]: E0710 23:34:30.548031 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.548053 kubelet[2659]: W0710 23:34:30.548047 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.548108 kubelet[2659]: E0710 23:34:30.548059 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.548289 kubelet[2659]: E0710 23:34:30.548270 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.548289 kubelet[2659]: W0710 23:34:30.548282 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.548362 kubelet[2659]: E0710 23:34:30.548292 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.549813 kubelet[2659]: E0710 23:34:30.549789 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.549813 kubelet[2659]: W0710 23:34:30.549804 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.549813 kubelet[2659]: E0710 23:34:30.549817 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.549946 kubelet[2659]: I0710 23:34:30.549852 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e5a2663-9022-49fc-bebc-a43168cdc7dc-registration-dir\") pod \"csi-node-driver-rx5zr\" (UID: \"6e5a2663-9022-49fc-bebc-a43168cdc7dc\") " pod="calico-system/csi-node-driver-rx5zr" Jul 10 23:34:30.550209 kubelet[2659]: E0710 23:34:30.550180 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.550209 kubelet[2659]: W0710 23:34:30.550199 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.550209 kubelet[2659]: E0710 23:34:30.550211 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.550317 kubelet[2659]: I0710 23:34:30.550299 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84tq\" (UniqueName: \"kubernetes.io/projected/6e5a2663-9022-49fc-bebc-a43168cdc7dc-kube-api-access-r84tq\") pod \"csi-node-driver-rx5zr\" (UID: \"6e5a2663-9022-49fc-bebc-a43168cdc7dc\") " pod="calico-system/csi-node-driver-rx5zr" Jul 10 23:34:30.550535 kubelet[2659]: E0710 23:34:30.550517 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.550535 kubelet[2659]: W0710 23:34:30.550533 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.550609 kubelet[2659]: E0710 23:34:30.550543 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.550806 kubelet[2659]: E0710 23:34:30.550788 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.550806 kubelet[2659]: W0710 23:34:30.550802 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.550907 kubelet[2659]: E0710 23:34:30.550812 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.552192 kubelet[2659]: E0710 23:34:30.551921 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.552192 kubelet[2659]: W0710 23:34:30.551942 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.552192 kubelet[2659]: E0710 23:34:30.551959 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.552192 kubelet[2659]: I0710 23:34:30.551988 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6e5a2663-9022-49fc-bebc-a43168cdc7dc-varrun\") pod \"csi-node-driver-rx5zr\" (UID: \"6e5a2663-9022-49fc-bebc-a43168cdc7dc\") " pod="calico-system/csi-node-driver-rx5zr" Jul 10 23:34:30.555643 kubelet[2659]: E0710 23:34:30.554736 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.555643 kubelet[2659]: W0710 23:34:30.554770 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.555643 kubelet[2659]: E0710 23:34:30.554816 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.557242 kubelet[2659]: E0710 23:34:30.557216 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.557242 kubelet[2659]: W0710 23:34:30.557240 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.557310 kubelet[2659]: E0710 23:34:30.557257 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.557561 kubelet[2659]: E0710 23:34:30.557542 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.557561 kubelet[2659]: W0710 23:34:30.557555 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.557613 kubelet[2659]: E0710 23:34:30.557566 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.558473 kubelet[2659]: E0710 23:34:30.558447 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.558519 kubelet[2659]: W0710 23:34:30.558502 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.558547 kubelet[2659]: E0710 23:34:30.558523 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.560398 kubelet[2659]: E0710 23:34:30.560374 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.560398 kubelet[2659]: W0710 23:34:30.560395 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.560457 kubelet[2659]: E0710 23:34:30.560410 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.560678 kubelet[2659]: E0710 23:34:30.560662 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.560678 kubelet[2659]: W0710 23:34:30.560675 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.560739 kubelet[2659]: E0710 23:34:30.560686 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.569396 containerd[1506]: time="2025-07-10T23:34:30.569347649Z" level=info msg="connecting to shim 702b5e37ca06392834d7eebfbdf4ed429b8be070f6482d4acf23be3457b66e5b" address="unix:///run/containerd/s/4e63de1fa348a1030ae31097143dabc5565a8e057596c7fea8a659da2e485f37" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:30.595449 containerd[1506]: time="2025-07-10T23:34:30.595408290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qnkps,Uid:2d078d36-6310-4fc3-83ea-bfa9fd76657f,Namespace:calico-system,Attempt:0,}" Jul 10 23:34:30.596843 systemd[1]: Started cri-containerd-702b5e37ca06392834d7eebfbdf4ed429b8be070f6482d4acf23be3457b66e5b.scope - libcontainer container 702b5e37ca06392834d7eebfbdf4ed429b8be070f6482d4acf23be3457b66e5b. Jul 10 23:34:30.661967 kubelet[2659]: E0710 23:34:30.661803 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.661967 kubelet[2659]: W0710 23:34:30.661851 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.661967 kubelet[2659]: E0710 23:34:30.661876 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.662489 kubelet[2659]: E0710 23:34:30.662466 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.662489 kubelet[2659]: W0710 23:34:30.662485 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.662579 kubelet[2659]: E0710 23:34:30.662503 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.662830 kubelet[2659]: E0710 23:34:30.662748 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.662830 kubelet[2659]: W0710 23:34:30.662758 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.662830 kubelet[2659]: E0710 23:34:30.662768 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.663098 kubelet[2659]: E0710 23:34:30.663023 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.663098 kubelet[2659]: W0710 23:34:30.663038 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.663098 kubelet[2659]: E0710 23:34:30.663048 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.663257 kubelet[2659]: E0710 23:34:30.663232 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.663257 kubelet[2659]: W0710 23:34:30.663245 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.663257 kubelet[2659]: E0710 23:34:30.663255 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.663745 kubelet[2659]: E0710 23:34:30.663724 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.663745 kubelet[2659]: W0710 23:34:30.663743 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.663984 kubelet[2659]: E0710 23:34:30.663756 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.664850 kubelet[2659]: E0710 23:34:30.664808 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.664850 kubelet[2659]: W0710 23:34:30.664829 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.664850 kubelet[2659]: E0710 23:34:30.664842 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.665433 kubelet[2659]: E0710 23:34:30.665410 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.665433 kubelet[2659]: W0710 23:34:30.665430 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.665517 kubelet[2659]: E0710 23:34:30.665443 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.665881 kubelet[2659]: E0710 23:34:30.665666 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.665881 kubelet[2659]: W0710 23:34:30.665706 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.665881 kubelet[2659]: E0710 23:34:30.665718 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.667916 kubelet[2659]: E0710 23:34:30.667882 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.667916 kubelet[2659]: W0710 23:34:30.667907 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.667916 kubelet[2659]: E0710 23:34:30.667924 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.668081 containerd[1506]: time="2025-07-10T23:34:30.668003255Z" level=info msg="connecting to shim 874d6e5fe5a8bda8f5221e545faf7a1d92f2f59c674a90c9865591a2df29dc2a" address="unix:///run/containerd/s/6195371bd501ef1abe33785e337d07f9bab31837235127083657c7e33160d452" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:30.668348 kubelet[2659]: E0710 23:34:30.668146 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.668348 kubelet[2659]: W0710 23:34:30.668348 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.668432 kubelet[2659]: E0710 23:34:30.668364 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.673434 kubelet[2659]: E0710 23:34:30.671729 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.673434 kubelet[2659]: W0710 23:34:30.671780 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.673434 kubelet[2659]: E0710 23:34:30.671798 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.673434 kubelet[2659]: E0710 23:34:30.672769 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.673434 kubelet[2659]: W0710 23:34:30.672787 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.673434 kubelet[2659]: E0710 23:34:30.672802 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.673434 kubelet[2659]: E0710 23:34:30.673042 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.673434 kubelet[2659]: W0710 23:34:30.673052 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.673434 kubelet[2659]: E0710 23:34:30.673062 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.673434 kubelet[2659]: E0710 23:34:30.673250 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.673780 kubelet[2659]: W0710 23:34:30.673260 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.673780 kubelet[2659]: E0710 23:34:30.673269 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.673780 kubelet[2659]: E0710 23:34:30.673441 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.673780 kubelet[2659]: W0710 23:34:30.673449 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.673780 kubelet[2659]: E0710 23:34:30.673458 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.673780 kubelet[2659]: E0710 23:34:30.673610 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.673780 kubelet[2659]: W0710 23:34:30.673618 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.673780 kubelet[2659]: E0710 23:34:30.673626 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.674825 kubelet[2659]: E0710 23:34:30.674789 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.674825 kubelet[2659]: W0710 23:34:30.674811 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.674825 kubelet[2659]: E0710 23:34:30.674825 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.677699 kubelet[2659]: E0710 23:34:30.676745 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.677699 kubelet[2659]: W0710 23:34:30.676767 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.677699 kubelet[2659]: E0710 23:34:30.676784 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.677841 kubelet[2659]: E0710 23:34:30.677736 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.677841 kubelet[2659]: W0710 23:34:30.677759 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.677841 kubelet[2659]: E0710 23:34:30.677778 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.679656 kubelet[2659]: E0710 23:34:30.679073 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.679656 kubelet[2659]: W0710 23:34:30.679095 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.679656 kubelet[2659]: E0710 23:34:30.679113 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.682666 kubelet[2659]: E0710 23:34:30.680492 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.682666 kubelet[2659]: W0710 23:34:30.680513 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.682666 kubelet[2659]: E0710 23:34:30.680532 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.683006 kubelet[2659]: E0710 23:34:30.682974 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.683006 kubelet[2659]: W0710 23:34:30.682997 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.683065 kubelet[2659]: E0710 23:34:30.683018 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 23:34:30.685465 kubelet[2659]: E0710 23:34:30.685420 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.685465 kubelet[2659]: W0710 23:34:30.685445 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.685465 kubelet[2659]: E0710 23:34:30.685465 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.687799 kubelet[2659]: E0710 23:34:30.687766 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.687889 kubelet[2659]: W0710 23:34:30.687829 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.687889 kubelet[2659]: E0710 23:34:30.687852 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.702541 kubelet[2659]: E0710 23:34:30.701879 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:30.702541 kubelet[2659]: W0710 23:34:30.701899 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:30.702541 kubelet[2659]: E0710 23:34:30.701926 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:30.709029 systemd[1]: Started cri-containerd-874d6e5fe5a8bda8f5221e545faf7a1d92f2f59c674a90c9865591a2df29dc2a.scope - libcontainer container 874d6e5fe5a8bda8f5221e545faf7a1d92f2f59c674a90c9865591a2df29dc2a. 
Jul 10 23:34:30.788685 containerd[1506]: time="2025-07-10T23:34:30.788616878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-696599bbd7-wldt2,Uid:4e9bed63-65fe-407a-a135-de054df54793,Namespace:calico-system,Attempt:0,} returns sandbox id \"702b5e37ca06392834d7eebfbdf4ed429b8be070f6482d4acf23be3457b66e5b\"" Jul 10 23:34:30.800588 containerd[1506]: time="2025-07-10T23:34:30.799624971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qnkps,Uid:2d078d36-6310-4fc3-83ea-bfa9fd76657f,Namespace:calico-system,Attempt:0,} returns sandbox id \"874d6e5fe5a8bda8f5221e545faf7a1d92f2f59c674a90c9865591a2df29dc2a\"" Jul 10 23:34:30.800718 kubelet[2659]: E0710 23:34:30.799989 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:30.814657 containerd[1506]: time="2025-07-10T23:34:30.814588385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 23:34:31.900923 kubelet[2659]: E0710 23:34:31.899142 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx5zr" podUID="6e5a2663-9022-49fc-bebc-a43168cdc7dc" Jul 10 23:34:32.215897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3737528359.mount: Deactivated successfully. Jul 10 23:34:32.673819 containerd[1506]: time="2025-07-10T23:34:32.673130093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:32.683954 containerd[1506]: time="2025-07-10T23:34:32.683901151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 10 23:34:32.701014 containerd[1506]: time="2025-07-10T23:34:32.700934316Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:32.717594 containerd[1506]: time="2025-07-10T23:34:32.717529333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:32.719076 containerd[1506]: time="2025-07-10T23:34:32.718942716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.904271157s" Jul 10 23:34:32.719076 containerd[1506]: time="2025-07-10T23:34:32.718981362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 10 23:34:32.721281 containerd[1506]: time="2025-07-10T23:34:32.721045727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 23:34:32.762206 containerd[1506]: time="2025-07-10T23:34:32.762146567Z" level=info msg="CreateContainer within sandbox \"702b5e37ca06392834d7eebfbdf4ed429b8be070f6482d4acf23be3457b66e5b\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 23:34:32.854560 containerd[1506]: time="2025-07-10T23:34:32.853833023Z" level=info msg="Container 370cbd3276d3907cbc37839827abc42cafa53653b42641ce6cb23a1ffe664284: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:32.932713 containerd[1506]: time="2025-07-10T23:34:32.932317197Z" level=info msg="CreateContainer within sandbox \"702b5e37ca06392834d7eebfbdf4ed429b8be070f6482d4acf23be3457b66e5b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"370cbd3276d3907cbc37839827abc42cafa53653b42641ce6cb23a1ffe664284\"" Jul 10 23:34:32.933208 containerd[1506]: time="2025-07-10T23:34:32.933106761Z" level=info msg="StartContainer for \"370cbd3276d3907cbc37839827abc42cafa53653b42641ce6cb23a1ffe664284\"" Jul 10 23:34:32.934996 containerd[1506]: time="2025-07-10T23:34:32.934712574Z" level=info msg="connecting to shim 370cbd3276d3907cbc37839827abc42cafa53653b42641ce6cb23a1ffe664284" address="unix:///run/containerd/s/4e63de1fa348a1030ae31097143dabc5565a8e057596c7fea8a659da2e485f37" protocol=ttrpc version=3 Jul 10 23:34:32.963876 systemd[1]: Started cri-containerd-370cbd3276d3907cbc37839827abc42cafa53653b42641ce6cb23a1ffe664284.scope - libcontainer container 370cbd3276d3907cbc37839827abc42cafa53653b42641ce6cb23a1ffe664284. Jul 10 23:34:33.107630 containerd[1506]: time="2025-07-10T23:34:33.107590376Z" level=info msg="StartContainer for \"370cbd3276d3907cbc37839827abc42cafa53653b42641ce6cb23a1ffe664284\" returns successfully" Jul 10 23:34:33.899831 kubelet[2659]: E0710 23:34:33.899368 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx5zr" podUID="6e5a2663-9022-49fc-bebc-a43168cdc7dc" Jul 10 23:34:33.980730 kubelet[2659]: E0710 23:34:33.980671 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:34.068505 kubelet[2659]: E0710 23:34:34.068465 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:34.068505 kubelet[2659]: W0710 23:34:34.068499 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:34.068703 kubelet[2659]: E0710 23:34:34.068522 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 23:34:34.068797 kubelet[2659]: E0710 23:34:34.068778 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 23:34:34.068843 kubelet[2659]: W0710 23:34:34.068793 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 23:34:34.068843 kubelet[2659]: E0710 23:34:34.068835 2659 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the three-message kubelet sequence above (driver-call.go:262 "Failed to unmarshal output", driver-call.go:149 "FlexVolume: driver call failed ... executable file not found in $PATH", plugins.go:703 "Error dynamically probing plugins") repeats essentially verbatim roughly thirty more times between 23:34:34.068 and 23:34:34.094 as the kubelet re-probes the missing nodeagent~uds driver; the repeats are elided and only the interleaved containerd and systemd entries are kept ...] Jul 10 23:34:34.090610 containerd[1506]: time="2025-07-10T23:34:34.090410721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:34.094025 containerd[1506]: time="2025-07-10T23:34:34.093904668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 10 23:34:34.098678 containerd[1506]: time="2025-07-10T23:34:34.098518377Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:34.103930 containerd[1506]: time="2025-07-10T23:34:34.103875794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:34.104889 containerd[1506]: time="2025-07-10T23:34:34.104847415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.383762322s" Jul 10 23:34:34.104889 containerd[1506]: time="2025-07-10T23:34:34.104881980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 10 23:34:34.117522 containerd[1506]: time="2025-07-10T23:34:34.117478888Z" level=info msg="CreateContainer within sandbox \"874d6e5fe5a8bda8f5221e545faf7a1d92f2f59c674a90c9865591a2df29dc2a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 23:34:34.163820 containerd[1506]: time="2025-07-10T23:34:34.162803464Z" level=info msg="Container 8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:34.164977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085993152.mount: Deactivated successfully.
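
The flood of kubelet errors elided above has a simple mechanical explanation: the plugin prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet, so the call produces no output, and decoding that empty output as JSON fails. A minimal Go sketch of the failure mode (DriverStatus here is a trimmed stand-in for illustration, not the kubelet's full type):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus is a trimmed stand-in for the JSON object a FlexVolume
    // driver is expected to print on stdout.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        // The probe cannot locate the driver binary at all ...
        if _, err := exec.LookPath("uds"); err != nil {
            fmt.Println(err) // exec: "uds": executable file not found in $PATH
        }

        // ... so the captured output is empty, and decoding it fails with
        // exactly the error string seen in the log.
        var st DriverStatus
        if err := json.Unmarshal([]byte(""), &st); err != nil {
            fmt.Println(err) // unexpected end of JSON input
        }
    }

The timing is consistent with the flexvol-driver init container created just above: Calico's pod2daemon-flexvol image exists precisely to install that uds driver binary, and the probe errors do not recur after it has run.
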
Jul 10 23:34:34.199603 containerd[1506]: time="2025-07-10T23:34:34.199444139Z" level=info msg="CreateContainer within sandbox \"874d6e5fe5a8bda8f5221e545faf7a1d92f2f59c674a90c9865591a2df29dc2a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38\"" Jul 10 23:34:34.200591 containerd[1506]: time="2025-07-10T23:34:34.200549660Z" level=info msg="StartContainer for \"8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38\"" Jul 10 23:34:34.204759 containerd[1506]: time="2025-07-10T23:34:34.202249506Z" level=info msg="connecting to shim 8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38" address="unix:///run/containerd/s/6195371bd501ef1abe33785e337d07f9bab31837235127083657c7e33160d452" protocol=ttrpc version=3 Jul 10 23:34:34.241891 systemd[1]: Started cri-containerd-8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38.scope - libcontainer container 8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38. Jul 10 23:34:34.335055 containerd[1506]: time="2025-07-10T23:34:34.335011047Z" level=info msg="StartContainer for \"8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38\" returns successfully" Jul 10 23:34:34.459346 systemd[1]: cri-containerd-8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38.scope: Deactivated successfully. Jul 10 23:34:34.506096 containerd[1506]: time="2025-07-10T23:34:34.506019057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38\" id:\"8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38\" pid:3367 exited_at:{seconds:1752190474 nanos:470838433}" Jul 10 23:34:34.506096 containerd[1506]: time="2025-07-10T23:34:34.506024018Z" level=info msg="received exit event container_id:\"8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38\" id:\"8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38\" pid:3367 exited_at:{seconds:1752190474 nanos:470838433}" Jul 10 23:34:34.560215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bea808985003463cb149b767f8cc791302694e0ac51d98ffc4e192d8d8bad38-rootfs.mount: Deactivated successfully. 
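
The TaskExit event above reports the exit time as raw protobuf fields (exited_at:{seconds:1752190474 nanos:470838433}). For cross-checking such values against the journal's wall-clock prefixes, a small Go conversion using the numbers from that event:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at from the TaskExit event for container 8bea8089...
        exitedAt := time.Unix(1752190474, 470838433).UTC()
        fmt.Println(exitedAt.Format(time.RFC3339Nano))
        // Output: 2025-07-10T23:34:34.470838433Z, i.e. the flexvol-driver
        // task exited roughly 35ms before the 23:34:34.506 journal entries
        // that report the exit event.
    }
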
Jul 10 23:34:34.982838 kubelet[2659]: I0710 23:34:34.982794 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:34:34.984217 containerd[1506]: time="2025-07-10T23:34:34.983850221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 23:34:34.984985 kubelet[2659]: E0710 23:34:34.984237 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:35.001611 kubelet[2659]: I0710 23:34:35.000621 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-696599bbd7-wldt2" podStartSLOduration=4.094554845 podStartE2EDuration="6.000602211s" podCreationTimestamp="2025-07-10 23:34:29 +0000 UTC" firstStartedPulling="2025-07-10 23:34:30.814044411 +0000 UTC m=+22.023493916" lastFinishedPulling="2025-07-10 23:34:32.720091777 +0000 UTC m=+23.929541282" observedRunningTime="2025-07-10 23:34:34.001087762 +0000 UTC m=+25.210537267" watchObservedRunningTime="2025-07-10 23:34:35.000602211 +0000 UTC m=+26.210051716" Jul 10 23:34:35.901433 kubelet[2659]: E0710 23:34:35.898691 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx5zr" podUID="6e5a2663-9022-49fc-bebc-a43168cdc7dc" Jul 10 23:34:36.611172 containerd[1506]: time="2025-07-10T23:34:36.611119525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:36.611603 containerd[1506]: time="2025-07-10T23:34:36.611584587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 10 23:34:36.613124 containerd[1506]: time="2025-07-10T23:34:36.612416938Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:36.619439 containerd[1506]: time="2025-07-10T23:34:36.619394834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:36.620096 containerd[1506]: time="2025-07-10T23:34:36.620063243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 1.636179057s" Jul 10 23:34:36.620190 containerd[1506]: time="2025-07-10T23:34:36.620174618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 10 23:34:36.626678 containerd[1506]: time="2025-07-10T23:34:36.626622442Z" level=info msg="CreateContainer within sandbox \"874d6e5fe5a8bda8f5221e545faf7a1d92f2f59c674a90c9865591a2df29dc2a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 23:34:36.638670 containerd[1506]: time="2025-07-10T23:34:36.637946920Z" level=info msg="Container 
a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:36.649071 containerd[1506]: time="2025-07-10T23:34:36.649014923Z" level=info msg="CreateContainer within sandbox \"874d6e5fe5a8bda8f5221e545faf7a1d92f2f59c674a90c9865591a2df29dc2a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3\"" Jul 10 23:34:36.650919 containerd[1506]: time="2025-07-10T23:34:36.649784426Z" level=info msg="StartContainer for \"a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3\"" Jul 10 23:34:36.651608 containerd[1506]: time="2025-07-10T23:34:36.651562745Z" level=info msg="connecting to shim a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3" address="unix:///run/containerd/s/6195371bd501ef1abe33785e337d07f9bab31837235127083657c7e33160d452" protocol=ttrpc version=3 Jul 10 23:34:36.674800 systemd[1]: Started cri-containerd-a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3.scope - libcontainer container a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3. Jul 10 23:34:36.715279 containerd[1506]: time="2025-07-10T23:34:36.715244479Z" level=info msg="StartContainer for \"a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3\" returns successfully" Jul 10 23:34:37.285029 systemd[1]: cri-containerd-a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3.scope: Deactivated successfully. Jul 10 23:34:37.285332 systemd[1]: cri-containerd-a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3.scope: Consumed 492ms CPU time, 173.9M memory peak, 2.9M read from disk, 165.8M written to disk. Jul 10 23:34:37.289161 containerd[1506]: time="2025-07-10T23:34:37.289100785Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3\" id:\"a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3\" pid:3427 exited_at:{seconds:1752190477 nanos:287858905}" Jul 10 23:34:37.296333 containerd[1506]: time="2025-07-10T23:34:37.296286232Z" level=info msg="received exit event container_id:\"a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3\" id:\"a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3\" pid:3427 exited_at:{seconds:1752190477 nanos:287858905}" Jul 10 23:34:37.313701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a27d45ae636da76b39cd68ba43fde189bca70c5924751c547e8c542f146bd2c3-rootfs.mount: Deactivated successfully. Jul 10 23:34:37.330853 kubelet[2659]: I0710 23:34:37.330825 2659 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 23:34:37.447261 systemd[1]: Created slice kubepods-burstable-pod484e1c2d_e63d_4aa0_89fe_b0044406bec6.slice - libcontainer container kubepods-burstable-pod484e1c2d_e63d_4aa0_89fe_b0044406bec6.slice. Jul 10 23:34:37.458157 systemd[1]: Created slice kubepods-burstable-pod51f1e356_7683_46d3_a698_c1e84639d1c3.slice - libcontainer container kubepods-burstable-pod51f1e356_7683_46d3_a698_c1e84639d1c3.slice. Jul 10 23:34:37.465762 systemd[1]: Created slice kubepods-besteffort-pod858fb430_f79b_4666_9fc3_06b2bf0c2228.slice - libcontainer container kubepods-besteffort-pod858fb430_f79b_4666_9fc3_06b2bf0c2228.slice. Jul 10 23:34:37.479025 systemd[1]: Created slice kubepods-besteffort-pod1aa67573_c049_4a50_9fad_b9f78cda01f2.slice - libcontainer container kubepods-besteffort-pod1aa67573_c049_4a50_9fad_b9f78cda01f2.slice. 
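
The pod_startup_latency_tracker entry logged at 23:34:35.000 above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which matches the kubelet convention of excluding pull time from the startup SLI. A quick Go check of the arithmetic, using the timestamps quoted in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the "2025-07-10 23:34:29 +0000 UTC" style
        // timestamps the kubelet prints.
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-07-10 23:34:29 +0000 UTC")
        firstPull := parse("2025-07-10 23:34:30.814044411 +0000 UTC")
        lastPull := parse("2025-07-10 23:34:32.720091777 +0000 UTC")
        running := parse("2025-07-10 23:34:35.000602211 +0000 UTC")

        e2e := running.Sub(created)            // end-to-end startup time
        slo := e2e - lastPull.Sub(firstPull)   // minus the image-pull window
        fmt.Println(e2e, slo)
        // Output: 6.000602211s 4.094554845s, matching podStartE2EDuration
        // and podStartSLOduration for calico-typha-696599bbd7-wldt2.
    }
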
Jul 10 23:34:37.489158 systemd[1]: Created slice kubepods-besteffort-pod10e61f78_d593_4fa1_ae09_fcdfd9ac29bc.slice - libcontainer container kubepods-besteffort-pod10e61f78_d593_4fa1_ae09_fcdfd9ac29bc.slice. Jul 10 23:34:37.494182 systemd[1]: Created slice kubepods-besteffort-pod303da94b_4882_4812_800c_56f0e558072c.slice - libcontainer container kubepods-besteffort-pod303da94b_4882_4812_800c_56f0e558072c.slice. Jul 10 23:34:37.500398 systemd[1]: Created slice kubepods-besteffort-podb4725a17_8a37_4480_8b8c_9c73b12960d1.slice - libcontainer container kubepods-besteffort-podb4725a17_8a37_4480_8b8c_9c73b12960d1.slice. Jul 10 23:34:37.507038 systemd[1]: Created slice kubepods-besteffort-podabd1aeee_e722_4a5a_b095_3a9d610562b3.slice - libcontainer container kubepods-besteffort-podabd1aeee_e722_4a5a_b095_3a9d610562b3.slice. Jul 10 23:34:37.520544 kubelet[2659]: I0710 23:34:37.519621 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp4vw\" (UniqueName: \"kubernetes.io/projected/b4725a17-8a37-4480-8b8c-9c73b12960d1-kube-api-access-sp4vw\") pod \"calico-apiserver-7f7c4b7586-c9rnq\" (UID: \"b4725a17-8a37-4480-8b8c-9c73b12960d1\") " pod="calico-apiserver/calico-apiserver-7f7c4b7586-c9rnq" Jul 10 23:34:37.520544 kubelet[2659]: I0710 23:34:37.520569 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd1aeee-e722-4a5a-b095-3a9d610562b3-whisker-ca-bundle\") pod \"whisker-64bf6fb7b5-kkfk4\" (UID: \"abd1aeee-e722-4a5a-b095-3a9d610562b3\") " pod="calico-system/whisker-64bf6fb7b5-kkfk4" Jul 10 23:34:37.520544 kubelet[2659]: I0710 23:34:37.520591 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jld9\" (UniqueName: \"kubernetes.io/projected/abd1aeee-e722-4a5a-b095-3a9d610562b3-kube-api-access-2jld9\") pod \"whisker-64bf6fb7b5-kkfk4\" (UID: \"abd1aeee-e722-4a5a-b095-3a9d610562b3\") " pod="calico-system/whisker-64bf6fb7b5-kkfk4" Jul 10 23:34:37.520544 kubelet[2659]: I0710 23:34:37.520613 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2kl\" (UniqueName: \"kubernetes.io/projected/484e1c2d-e63d-4aa0-89fe-b0044406bec6-kube-api-access-ld2kl\") pod \"coredns-674b8bbfcf-ltc94\" (UID: \"484e1c2d-e63d-4aa0-89fe-b0044406bec6\") " pod="kube-system/coredns-674b8bbfcf-ltc94" Jul 10 23:34:37.520544 kubelet[2659]: I0710 23:34:37.520630 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b4725a17-8a37-4480-8b8c-9c73b12960d1-calico-apiserver-certs\") pod \"calico-apiserver-7f7c4b7586-c9rnq\" (UID: \"b4725a17-8a37-4480-8b8c-9c73b12960d1\") " pod="calico-apiserver/calico-apiserver-7f7c4b7586-c9rnq" Jul 10 23:34:37.521486 kubelet[2659]: I0710 23:34:37.521296 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/484e1c2d-e63d-4aa0-89fe-b0044406bec6-config-volume\") pod \"coredns-674b8bbfcf-ltc94\" (UID: \"484e1c2d-e63d-4aa0-89fe-b0044406bec6\") " pod="kube-system/coredns-674b8bbfcf-ltc94" Jul 10 23:34:37.521612 kubelet[2659]: I0710 23:34:37.521589 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/10e61f78-d593-4fa1-ae09-fcdfd9ac29bc-calico-apiserver-certs\") pod \"calico-apiserver-7bcf67d85c-7lnr7\" (UID: \"10e61f78-d593-4fa1-ae09-fcdfd9ac29bc\") " pod="calico-apiserver/calico-apiserver-7bcf67d85c-7lnr7" Jul 10 23:34:37.521725 kubelet[2659]: I0710 23:34:37.521712 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfmdj\" (UniqueName: \"kubernetes.io/projected/10e61f78-d593-4fa1-ae09-fcdfd9ac29bc-kube-api-access-lfmdj\") pod \"calico-apiserver-7bcf67d85c-7lnr7\" (UID: \"10e61f78-d593-4fa1-ae09-fcdfd9ac29bc\") " pod="calico-apiserver/calico-apiserver-7bcf67d85c-7lnr7" Jul 10 23:34:37.521849 kubelet[2659]: I0710 23:34:37.521780 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1aa67573-c049-4a50-9fad-b9f78cda01f2-calico-apiserver-certs\") pod \"calico-apiserver-7bcf67d85c-cg7hx\" (UID: \"1aa67573-c049-4a50-9fad-b9f78cda01f2\") " pod="calico-apiserver/calico-apiserver-7bcf67d85c-cg7hx" Jul 10 23:34:37.521849 kubelet[2659]: I0710 23:34:37.521803 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5hqk\" (UniqueName: \"kubernetes.io/projected/858fb430-f79b-4666-9fc3-06b2bf0c2228-kube-api-access-m5hqk\") pod \"calico-kube-controllers-5444b768d7-9ffjl\" (UID: \"858fb430-f79b-4666-9fc3-06b2bf0c2228\") " pod="calico-system/calico-kube-controllers-5444b768d7-9ffjl" Jul 10 23:34:37.521933 kubelet[2659]: I0710 23:34:37.521922 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/858fb430-f79b-4666-9fc3-06b2bf0c2228-tigera-ca-bundle\") pod \"calico-kube-controllers-5444b768d7-9ffjl\" (UID: \"858fb430-f79b-4666-9fc3-06b2bf0c2228\") " pod="calico-system/calico-kube-controllers-5444b768d7-9ffjl" Jul 10 23:34:37.522019 kubelet[2659]: I0710 23:34:37.522007 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtmtc\" (UniqueName: \"kubernetes.io/projected/51f1e356-7683-46d3-a698-c1e84639d1c3-kube-api-access-xtmtc\") pod \"coredns-674b8bbfcf-s8kb7\" (UID: \"51f1e356-7683-46d3-a698-c1e84639d1c3\") " pod="kube-system/coredns-674b8bbfcf-s8kb7" Jul 10 23:34:37.522141 kubelet[2659]: I0710 23:34:37.522087 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303da94b-4882-4812-800c-56f0e558072c-config\") pod \"goldmane-768f4c5c69-7q7p2\" (UID: \"303da94b-4882-4812-800c-56f0e558072c\") " pod="calico-system/goldmane-768f4c5c69-7q7p2" Jul 10 23:34:37.522141 kubelet[2659]: I0710 23:34:37.522108 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/303da94b-4882-4812-800c-56f0e558072c-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-7q7p2\" (UID: \"303da94b-4882-4812-800c-56f0e558072c\") " pod="calico-system/goldmane-768f4c5c69-7q7p2" Jul 10 23:34:37.522141 kubelet[2659]: I0710 23:34:37.522126 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/303da94b-4882-4812-800c-56f0e558072c-goldmane-key-pair\") pod \"goldmane-768f4c5c69-7q7p2\" (UID: 
\"303da94b-4882-4812-800c-56f0e558072c\") " pod="calico-system/goldmane-768f4c5c69-7q7p2" Jul 10 23:34:37.522318 kubelet[2659]: I0710 23:34:37.522259 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/abd1aeee-e722-4a5a-b095-3a9d610562b3-whisker-backend-key-pair\") pod \"whisker-64bf6fb7b5-kkfk4\" (UID: \"abd1aeee-e722-4a5a-b095-3a9d610562b3\") " pod="calico-system/whisker-64bf6fb7b5-kkfk4" Jul 10 23:34:37.522318 kubelet[2659]: I0710 23:34:37.522287 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51f1e356-7683-46d3-a698-c1e84639d1c3-config-volume\") pod \"coredns-674b8bbfcf-s8kb7\" (UID: \"51f1e356-7683-46d3-a698-c1e84639d1c3\") " pod="kube-system/coredns-674b8bbfcf-s8kb7" Jul 10 23:34:37.522409 kubelet[2659]: I0710 23:34:37.522302 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lw9d\" (UniqueName: \"kubernetes.io/projected/303da94b-4882-4812-800c-56f0e558072c-kube-api-access-9lw9d\") pod \"goldmane-768f4c5c69-7q7p2\" (UID: \"303da94b-4882-4812-800c-56f0e558072c\") " pod="calico-system/goldmane-768f4c5c69-7q7p2" Jul 10 23:34:37.522534 kubelet[2659]: I0710 23:34:37.522482 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7cjj\" (UniqueName: \"kubernetes.io/projected/1aa67573-c049-4a50-9fad-b9f78cda01f2-kube-api-access-q7cjj\") pod \"calico-apiserver-7bcf67d85c-cg7hx\" (UID: \"1aa67573-c049-4a50-9fad-b9f78cda01f2\") " pod="calico-apiserver/calico-apiserver-7bcf67d85c-cg7hx" Jul 10 23:34:37.753022 kubelet[2659]: E0710 23:34:37.752951 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:37.753988 containerd[1506]: time="2025-07-10T23:34:37.753738885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ltc94,Uid:484e1c2d-e63d-4aa0-89fe-b0044406bec6,Namespace:kube-system,Attempt:0,}" Jul 10 23:34:37.763020 kubelet[2659]: E0710 23:34:37.762985 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:37.776902 containerd[1506]: time="2025-07-10T23:34:37.773423785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8kb7,Uid:51f1e356-7683-46d3-a698-c1e84639d1c3,Namespace:kube-system,Attempt:0,}" Jul 10 23:34:37.784138 containerd[1506]: time="2025-07-10T23:34:37.779344868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5444b768d7-9ffjl,Uid:858fb430-f79b-4666-9fc3-06b2bf0c2228,Namespace:calico-system,Attempt:0,}" Jul 10 23:34:37.799535 containerd[1506]: time="2025-07-10T23:34:37.797898862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf67d85c-cg7hx,Uid:1aa67573-c049-4a50-9fad-b9f78cda01f2,Namespace:calico-apiserver,Attempt:0,}" Jul 10 23:34:37.806503 containerd[1506]: time="2025-07-10T23:34:37.806262981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c4b7586-c9rnq,Uid:b4725a17-8a37-4480-8b8c-9c73b12960d1,Namespace:calico-apiserver,Attempt:0,}" Jul 10 23:34:37.806880 containerd[1506]: time="2025-07-10T23:34:37.806852377Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf67d85c-7lnr7,Uid:10e61f78-d593-4fa1-ae09-fcdfd9ac29bc,Namespace:calico-apiserver,Attempt:0,}" Jul 10 23:34:37.807096 containerd[1506]: time="2025-07-10T23:34:37.807075246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7q7p2,Uid:303da94b-4882-4812-800c-56f0e558072c,Namespace:calico-system,Attempt:0,}" Jul 10 23:34:37.812071 containerd[1506]: time="2025-07-10T23:34:37.812040086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64bf6fb7b5-kkfk4,Uid:abd1aeee-e722-4a5a-b095-3a9d610562b3,Namespace:calico-system,Attempt:0,}" Jul 10 23:34:37.962322 systemd[1]: Created slice kubepods-besteffort-pod6e5a2663_9022_49fc_bebc_a43168cdc7dc.slice - libcontainer container kubepods-besteffort-pod6e5a2663_9022_49fc_bebc_a43168cdc7dc.slice. Jul 10 23:34:37.965247 containerd[1506]: time="2025-07-10T23:34:37.965131155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx5zr,Uid:6e5a2663-9022-49fc-bebc-a43168cdc7dc,Namespace:calico-system,Attempt:0,}" Jul 10 23:34:38.020853 containerd[1506]: time="2025-07-10T23:34:38.020746372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 23:34:38.218238 containerd[1506]: time="2025-07-10T23:34:38.218074180Z" level=error msg="Failed to destroy network for sandbox \"d90903d51cf1d128d7fb08595cdf40f1c6886dcd67b274ac27f8c5d5e3daad69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.222150 containerd[1506]: time="2025-07-10T23:34:38.222094480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8kb7,Uid:51f1e356-7683-46d3-a698-c1e84639d1c3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d90903d51cf1d128d7fb08595cdf40f1c6886dcd67b274ac27f8c5d5e3daad69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.225793 kubelet[2659]: E0710 23:34:38.225733 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d90903d51cf1d128d7fb08595cdf40f1c6886dcd67b274ac27f8c5d5e3daad69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.225876 kubelet[2659]: E0710 23:34:38.225830 2659 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d90903d51cf1d128d7fb08595cdf40f1c6886dcd67b274ac27f8c5d5e3daad69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s8kb7" Jul 10 23:34:38.225876 kubelet[2659]: E0710 23:34:38.225851 2659 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d90903d51cf1d128d7fb08595cdf40f1c6886dcd67b274ac27f8c5d5e3daad69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s8kb7" Jul 10 23:34:38.225934 kubelet[2659]: E0710 23:34:38.225907 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-s8kb7_kube-system(51f1e356-7683-46d3-a698-c1e84639d1c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-s8kb7_kube-system(51f1e356-7683-46d3-a698-c1e84639d1c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d90903d51cf1d128d7fb08595cdf40f1c6886dcd67b274ac27f8c5d5e3daad69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-s8kb7" podUID="51f1e356-7683-46d3-a698-c1e84639d1c3" Jul 10 23:34:38.240906 containerd[1506]: time="2025-07-10T23:34:38.240840890Z" level=error msg="Failed to destroy network for sandbox \"5ce5b814e27d652519674b66bc3fe9e06ad56c2b257f02a43546024bc150ed08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.244661 containerd[1506]: time="2025-07-10T23:34:38.244157982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx5zr,Uid:6e5a2663-9022-49fc-bebc-a43168cdc7dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ce5b814e27d652519674b66bc3fe9e06ad56c2b257f02a43546024bc150ed08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.244783 kubelet[2659]: E0710 23:34:38.244363 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ce5b814e27d652519674b66bc3fe9e06ad56c2b257f02a43546024bc150ed08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.244783 kubelet[2659]: E0710 23:34:38.244420 2659 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ce5b814e27d652519674b66bc3fe9e06ad56c2b257f02a43546024bc150ed08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx5zr" Jul 10 23:34:38.244783 kubelet[2659]: E0710 23:34:38.244439 2659 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ce5b814e27d652519674b66bc3fe9e06ad56c2b257f02a43546024bc150ed08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx5zr" Jul 10 23:34:38.244873 kubelet[2659]: E0710 23:34:38.244480 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rx5zr_calico-system(6e5a2663-9022-49fc-bebc-a43168cdc7dc)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"csi-node-driver-rx5zr_calico-system(6e5a2663-9022-49fc-bebc-a43168cdc7dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ce5b814e27d652519674b66bc3fe9e06ad56c2b257f02a43546024bc150ed08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rx5zr" podUID="6e5a2663-9022-49fc-bebc-a43168cdc7dc" Jul 10 23:34:38.251218 containerd[1506]: time="2025-07-10T23:34:38.251176095Z" level=error msg="Failed to destroy network for sandbox \"d0c26cb46b19b574f6f75088c0271e1ec76b555fe5d7a827b6383a9a3afa8ee6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.255501 containerd[1506]: time="2025-07-10T23:34:38.255455066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf67d85c-7lnr7,Uid:10e61f78-d593-4fa1-ae09-fcdfd9ac29bc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c26cb46b19b574f6f75088c0271e1ec76b555fe5d7a827b6383a9a3afa8ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.255949 kubelet[2659]: E0710 23:34:38.255890 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c26cb46b19b574f6f75088c0271e1ec76b555fe5d7a827b6383a9a3afa8ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.256011 kubelet[2659]: E0710 23:34:38.255985 2659 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c26cb46b19b574f6f75088c0271e1ec76b555fe5d7a827b6383a9a3afa8ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bcf67d85c-7lnr7" Jul 10 23:34:38.256039 kubelet[2659]: E0710 23:34:38.256018 2659 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c26cb46b19b574f6f75088c0271e1ec76b555fe5d7a827b6383a9a3afa8ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bcf67d85c-7lnr7" Jul 10 23:34:38.256110 kubelet[2659]: E0710 23:34:38.256086 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bcf67d85c-7lnr7_calico-apiserver(10e61f78-d593-4fa1-ae09-fcdfd9ac29bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bcf67d85c-7lnr7_calico-apiserver(10e61f78-d593-4fa1-ae09-fcdfd9ac29bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0c26cb46b19b574f6f75088c0271e1ec76b555fe5d7a827b6383a9a3afa8ee6\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bcf67d85c-7lnr7" podUID="10e61f78-d593-4fa1-ae09-fcdfd9ac29bc" Jul 10 23:34:38.259513 containerd[1506]: time="2025-07-10T23:34:38.259461765Z" level=error msg="Failed to destroy network for sandbox \"d3bb1dc1e00808d7d1d42e6f141f1a8ae1abf34e4eff020323bc35cc5104d35c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.262425 containerd[1506]: time="2025-07-10T23:34:38.262350724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64bf6fb7b5-kkfk4,Uid:abd1aeee-e722-4a5a-b095-3a9d610562b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3bb1dc1e00808d7d1d42e6f141f1a8ae1abf34e4eff020323bc35cc5104d35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.262570 kubelet[2659]: E0710 23:34:38.262536 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3bb1dc1e00808d7d1d42e6f141f1a8ae1abf34e4eff020323bc35cc5104d35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.262623 kubelet[2659]: E0710 23:34:38.262587 2659 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3bb1dc1e00808d7d1d42e6f141f1a8ae1abf34e4eff020323bc35cc5104d35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64bf6fb7b5-kkfk4" Jul 10 23:34:38.262623 kubelet[2659]: E0710 23:34:38.262613 2659 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3bb1dc1e00808d7d1d42e6f141f1a8ae1abf34e4eff020323bc35cc5104d35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64bf6fb7b5-kkfk4" Jul 10 23:34:38.262726 kubelet[2659]: E0710 23:34:38.262676 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64bf6fb7b5-kkfk4_calico-system(abd1aeee-e722-4a5a-b095-3a9d610562b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64bf6fb7b5-kkfk4_calico-system(abd1aeee-e722-4a5a-b095-3a9d610562b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3bb1dc1e00808d7d1d42e6f141f1a8ae1abf34e4eff020323bc35cc5104d35c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64bf6fb7b5-kkfk4" podUID="abd1aeee-e722-4a5a-b095-3a9d610562b3" Jul 10 23:34:38.263958 containerd[1506]: time="2025-07-10T23:34:38.263916878Z" level=error msg="Failed to destroy 
network for sandbox \"76e3c3b892a05873d4c88a65f303eadd877bce52a096d0b35058488f7f2e48b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.264388 containerd[1506]: time="2025-07-10T23:34:38.264355853Z" level=error msg="Failed to destroy network for sandbox \"6c55e38af4b4c2c3333a68b7776e20ba043fa1a107a88d9b75806a712e491e63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.265111 containerd[1506]: time="2025-07-10T23:34:38.265066141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5444b768d7-9ffjl,Uid:858fb430-f79b-4666-9fc3-06b2bf0c2228,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e3c3b892a05873d4c88a65f303eadd877bce52a096d0b35058488f7f2e48b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.265365 kubelet[2659]: E0710 23:34:38.265323 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e3c3b892a05873d4c88a65f303eadd877bce52a096d0b35058488f7f2e48b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.265423 kubelet[2659]: E0710 23:34:38.265381 2659 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e3c3b892a05873d4c88a65f303eadd877bce52a096d0b35058488f7f2e48b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5444b768d7-9ffjl" Jul 10 23:34:38.265423 kubelet[2659]: E0710 23:34:38.265400 2659 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e3c3b892a05873d4c88a65f303eadd877bce52a096d0b35058488f7f2e48b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5444b768d7-9ffjl" Jul 10 23:34:38.265467 kubelet[2659]: E0710 23:34:38.265433 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5444b768d7-9ffjl_calico-system(858fb430-f79b-4666-9fc3-06b2bf0c2228)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5444b768d7-9ffjl_calico-system(858fb430-f79b-4666-9fc3-06b2bf0c2228)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76e3c3b892a05873d4c88a65f303eadd877bce52a096d0b35058488f7f2e48b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5444b768d7-9ffjl" 
podUID="858fb430-f79b-4666-9fc3-06b2bf0c2228" Jul 10 23:34:38.265892 containerd[1506]: time="2025-07-10T23:34:38.265861400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ltc94,Uid:484e1c2d-e63d-4aa0-89fe-b0044406bec6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c55e38af4b4c2c3333a68b7776e20ba043fa1a107a88d9b75806a712e491e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.266157 kubelet[2659]: E0710 23:34:38.266022 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c55e38af4b4c2c3333a68b7776e20ba043fa1a107a88d9b75806a712e491e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.266200 kubelet[2659]: E0710 23:34:38.266170 2659 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c55e38af4b4c2c3333a68b7776e20ba043fa1a107a88d9b75806a712e491e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ltc94" Jul 10 23:34:38.266229 kubelet[2659]: E0710 23:34:38.266216 2659 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c55e38af4b4c2c3333a68b7776e20ba043fa1a107a88d9b75806a712e491e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ltc94" Jul 10 23:34:38.266283 kubelet[2659]: E0710 23:34:38.266258 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ltc94_kube-system(484e1c2d-e63d-4aa0-89fe-b0044406bec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ltc94_kube-system(484e1c2d-e63d-4aa0-89fe-b0044406bec6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c55e38af4b4c2c3333a68b7776e20ba043fa1a107a88d9b75806a712e491e63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ltc94" podUID="484e1c2d-e63d-4aa0-89fe-b0044406bec6" Jul 10 23:34:38.268102 containerd[1506]: time="2025-07-10T23:34:38.268013267Z" level=error msg="Failed to destroy network for sandbox \"6ae6515300d463e5c1a6d49d8c81efa038258ee250059185f71899d929a537f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.268448 containerd[1506]: time="2025-07-10T23:34:38.268412237Z" level=error msg="Failed to destroy network for sandbox \"95130df9b7d9c3e7072a25e89598f6b3993ca72e48e27fe44c905186276291e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.269449 containerd[1506]: time="2025-07-10T23:34:38.268938222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf67d85c-cg7hx,Uid:1aa67573-c049-4a50-9fad-b9f78cda01f2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae6515300d463e5c1a6d49d8c81efa038258ee250059185f71899d929a537f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.269910 kubelet[2659]: E0710 23:34:38.269841 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae6515300d463e5c1a6d49d8c81efa038258ee250059185f71899d929a537f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.269967 kubelet[2659]: E0710 23:34:38.269914 2659 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae6515300d463e5c1a6d49d8c81efa038258ee250059185f71899d929a537f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bcf67d85c-cg7hx" Jul 10 23:34:38.269967 kubelet[2659]: E0710 23:34:38.269934 2659 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae6515300d463e5c1a6d49d8c81efa038258ee250059185f71899d929a537f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bcf67d85c-cg7hx" Jul 10 23:34:38.270018 kubelet[2659]: E0710 23:34:38.269975 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bcf67d85c-cg7hx_calico-apiserver(1aa67573-c049-4a50-9fad-b9f78cda01f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bcf67d85c-cg7hx_calico-apiserver(1aa67573-c049-4a50-9fad-b9f78cda01f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ae6515300d463e5c1a6d49d8c81efa038258ee250059185f71899d929a537f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bcf67d85c-cg7hx" podUID="1aa67573-c049-4a50-9fad-b9f78cda01f2" Jul 10 23:34:38.270541 containerd[1506]: time="2025-07-10T23:34:38.270479854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7q7p2,Uid:303da94b-4882-4812-800c-56f0e558072c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95130df9b7d9c3e7072a25e89598f6b3993ca72e48e27fe44c905186276291e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 
23:34:38.270829 kubelet[2659]: E0710 23:34:38.270719 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95130df9b7d9c3e7072a25e89598f6b3993ca72e48e27fe44c905186276291e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.276316 kubelet[2659]: E0710 23:34:38.274597 2659 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95130df9b7d9c3e7072a25e89598f6b3993ca72e48e27fe44c905186276291e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-7q7p2" Jul 10 23:34:38.276316 kubelet[2659]: E0710 23:34:38.275691 2659 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95130df9b7d9c3e7072a25e89598f6b3993ca72e48e27fe44c905186276291e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-7q7p2" Jul 10 23:34:38.276316 kubelet[2659]: E0710 23:34:38.275769 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-7q7p2_calico-system(303da94b-4882-4812-800c-56f0e558072c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-7q7p2_calico-system(303da94b-4882-4812-800c-56f0e558072c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95130df9b7d9c3e7072a25e89598f6b3993ca72e48e27fe44c905186276291e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-7q7p2" podUID="303da94b-4882-4812-800c-56f0e558072c" Jul 10 23:34:38.276473 containerd[1506]: time="2025-07-10T23:34:38.275339858Z" level=error msg="Failed to destroy network for sandbox \"c0b08793d1047bfeea7d5d7d090066954a5eb4f3da17973c1297f6ee550e85db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.278209 containerd[1506]: time="2025-07-10T23:34:38.278173530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c4b7586-c9rnq,Uid:b4725a17-8a37-4480-8b8c-9c73b12960d1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b08793d1047bfeea7d5d7d090066954a5eb4f3da17973c1297f6ee550e85db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.278529 kubelet[2659]: E0710 23:34:38.278503 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b08793d1047bfeea7d5d7d090066954a5eb4f3da17973c1297f6ee550e85db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 23:34:38.278670 kubelet[2659]: E0710 23:34:38.278608 2659 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b08793d1047bfeea7d5d7d090066954a5eb4f3da17973c1297f6ee550e85db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f7c4b7586-c9rnq" Jul 10 23:34:38.278735 kubelet[2659]: E0710 23:34:38.278628 2659 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b08793d1047bfeea7d5d7d090066954a5eb4f3da17973c1297f6ee550e85db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f7c4b7586-c9rnq" Jul 10 23:34:38.278836 kubelet[2659]: E0710 23:34:38.278811 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f7c4b7586-c9rnq_calico-apiserver(b4725a17-8a37-4480-8b8c-9c73b12960d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f7c4b7586-c9rnq_calico-apiserver(b4725a17-8a37-4480-8b8c-9c73b12960d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0b08793d1047bfeea7d5d7d090066954a5eb4f3da17973c1297f6ee550e85db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f7c4b7586-c9rnq" podUID="b4725a17-8a37-4480-8b8c-9c73b12960d1" Jul 10 23:34:41.066161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217299902.mount: Deactivated successfully. 
Jul 10 23:34:41.251091 containerd[1506]: time="2025-07-10T23:34:41.250912104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 10 23:34:41.258690 containerd[1506]: time="2025-07-10T23:34:41.258279448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:41.273031 containerd[1506]: time="2025-07-10T23:34:41.272982173Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:41.273753 containerd[1506]: time="2025-07-10T23:34:41.273702413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.252906915s" Jul 10 23:34:41.273753 containerd[1506]: time="2025-07-10T23:34:41.273748379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 10 23:34:41.275141 containerd[1506]: time="2025-07-10T23:34:41.275045444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:41.300433 containerd[1506]: time="2025-07-10T23:34:41.300383038Z" level=info msg="CreateContainer within sandbox \"874d6e5fe5a8bda8f5221e545faf7a1d92f2f59c674a90c9865591a2df29dc2a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 23:34:41.332882 containerd[1506]: time="2025-07-10T23:34:41.332776943Z" level=info msg="Container 1e8f083c1f230f4f3781dfb899ffa75fe54765ea4e7a46f04a66dae12cd244b6: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:41.349392 containerd[1506]: time="2025-07-10T23:34:41.349334235Z" level=info msg="CreateContainer within sandbox \"874d6e5fe5a8bda8f5221e545faf7a1d92f2f59c674a90c9865591a2df29dc2a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1e8f083c1f230f4f3781dfb899ffa75fe54765ea4e7a46f04a66dae12cd244b6\"" Jul 10 23:34:41.352713 containerd[1506]: time="2025-07-10T23:34:41.351468714Z" level=info msg="StartContainer for \"1e8f083c1f230f4f3781dfb899ffa75fe54765ea4e7a46f04a66dae12cd244b6\"" Jul 10 23:34:41.353336 containerd[1506]: time="2025-07-10T23:34:41.353309280Z" level=info msg="connecting to shim 1e8f083c1f230f4f3781dfb899ffa75fe54765ea4e7a46f04a66dae12cd244b6" address="unix:///run/containerd/s/6195371bd501ef1abe33785e337d07f9bab31837235127083657c7e33160d452" protocol=ttrpc version=3 Jul 10 23:34:41.381850 systemd[1]: Started cri-containerd-1e8f083c1f230f4f3781dfb899ffa75fe54765ea4e7a46f04a66dae12cd244b6.scope - libcontainer container 1e8f083c1f230f4f3781dfb899ffa75fe54765ea4e7a46f04a66dae12cd244b6. Jul 10 23:34:41.521164 containerd[1506]: time="2025-07-10T23:34:41.521044646Z" level=info msg="StartContainer for \"1e8f083c1f230f4f3781dfb899ffa75fe54765ea4e7a46f04a66dae12cd244b6\" returns successfully" Jul 10 23:34:41.736435 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 23:34:41.736573 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. 
All Rights Reserved. Jul 10 23:34:41.962812 kubelet[2659]: I0710 23:34:41.962766 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/abd1aeee-e722-4a5a-b095-3a9d610562b3-whisker-backend-key-pair\") pod \"abd1aeee-e722-4a5a-b095-3a9d610562b3\" (UID: \"abd1aeee-e722-4a5a-b095-3a9d610562b3\") " Jul 10 23:34:41.962812 kubelet[2659]: I0710 23:34:41.962826 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jld9\" (UniqueName: \"kubernetes.io/projected/abd1aeee-e722-4a5a-b095-3a9d610562b3-kube-api-access-2jld9\") pod \"abd1aeee-e722-4a5a-b095-3a9d610562b3\" (UID: \"abd1aeee-e722-4a5a-b095-3a9d610562b3\") " Jul 10 23:34:41.963254 kubelet[2659]: I0710 23:34:41.962866 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd1aeee-e722-4a5a-b095-3a9d610562b3-whisker-ca-bundle\") pod \"abd1aeee-e722-4a5a-b095-3a9d610562b3\" (UID: \"abd1aeee-e722-4a5a-b095-3a9d610562b3\") " Jul 10 23:34:41.971667 kubelet[2659]: I0710 23:34:41.971357 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd1aeee-e722-4a5a-b095-3a9d610562b3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "abd1aeee-e722-4a5a-b095-3a9d610562b3" (UID: "abd1aeee-e722-4a5a-b095-3a9d610562b3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:34:41.972367 kubelet[2659]: I0710 23:34:41.972321 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd1aeee-e722-4a5a-b095-3a9d610562b3-kube-api-access-2jld9" (OuterVolumeSpecName: "kube-api-access-2jld9") pod "abd1aeee-e722-4a5a-b095-3a9d610562b3" (UID: "abd1aeee-e722-4a5a-b095-3a9d610562b3"). InnerVolumeSpecName "kube-api-access-2jld9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:34:41.972709 kubelet[2659]: I0710 23:34:41.972664 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd1aeee-e722-4a5a-b095-3a9d610562b3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "abd1aeee-e722-4a5a-b095-3a9d610562b3" (UID: "abd1aeee-e722-4a5a-b095-3a9d610562b3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 23:34:42.061182 systemd[1]: Removed slice kubepods-besteffort-podabd1aeee_e722_4a5a_b095_3a9d610562b3.slice - libcontainer container kubepods-besteffort-podabd1aeee_e722_4a5a_b095_3a9d610562b3.slice. 
Jul 10 23:34:42.063195 kubelet[2659]: I0710 23:34:42.063139 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd1aeee-e722-4a5a-b095-3a9d610562b3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 23:34:42.063195 kubelet[2659]: I0710 23:34:42.063166 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/abd1aeee-e722-4a5a-b095-3a9d610562b3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 23:34:42.063195 kubelet[2659]: I0710 23:34:42.063175 2659 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2jld9\" (UniqueName: \"kubernetes.io/projected/abd1aeee-e722-4a5a-b095-3a9d610562b3-kube-api-access-2jld9\") on node \"localhost\" DevicePath \"\"" Jul 10 23:34:42.066188 systemd[1]: var-lib-kubelet-pods-abd1aeee\x2de722\x2d4a5a\x2db095\x2d3a9d610562b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jld9.mount: Deactivated successfully. Jul 10 23:34:42.066291 systemd[1]: var-lib-kubelet-pods-abd1aeee\x2de722\x2d4a5a\x2db095\x2d3a9d610562b3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 23:34:42.074771 kubelet[2659]: I0710 23:34:42.074530 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qnkps" podStartSLOduration=1.622402927 podStartE2EDuration="12.07451366s" podCreationTimestamp="2025-07-10 23:34:30 +0000 UTC" firstStartedPulling="2025-07-10 23:34:30.823101049 +0000 UTC m=+22.032550554" lastFinishedPulling="2025-07-10 23:34:41.275211782 +0000 UTC m=+32.484661287" observedRunningTime="2025-07-10 23:34:42.074338201 +0000 UTC m=+33.283787706" watchObservedRunningTime="2025-07-10 23:34:42.07451366 +0000 UTC m=+33.283963165" Jul 10 23:34:42.155296 systemd[1]: Created slice kubepods-besteffort-podb2cc0648_ced9_4ea9_8e51_b7ae4db3642e.slice - libcontainer container kubepods-besteffort-podb2cc0648_ced9_4ea9_8e51_b7ae4db3642e.slice. 
Jul 10 23:34:42.264524 kubelet[2659]: I0710 23:34:42.264395 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmqxn\" (UniqueName: \"kubernetes.io/projected/b2cc0648-ced9-4ea9-8e51-b7ae4db3642e-kube-api-access-hmqxn\") pod \"whisker-58cc5db9cf-tct8w\" (UID: \"b2cc0648-ced9-4ea9-8e51-b7ae4db3642e\") " pod="calico-system/whisker-58cc5db9cf-tct8w" Jul 10 23:34:42.264524 kubelet[2659]: I0710 23:34:42.264442 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b2cc0648-ced9-4ea9-8e51-b7ae4db3642e-whisker-backend-key-pair\") pod \"whisker-58cc5db9cf-tct8w\" (UID: \"b2cc0648-ced9-4ea9-8e51-b7ae4db3642e\") " pod="calico-system/whisker-58cc5db9cf-tct8w" Jul 10 23:34:42.264524 kubelet[2659]: I0710 23:34:42.264464 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2cc0648-ced9-4ea9-8e51-b7ae4db3642e-whisker-ca-bundle\") pod \"whisker-58cc5db9cf-tct8w\" (UID: \"b2cc0648-ced9-4ea9-8e51-b7ae4db3642e\") " pod="calico-system/whisker-58cc5db9cf-tct8w" Jul 10 23:34:42.454565 kubelet[2659]: I0710 23:34:42.453857 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:34:42.454565 kubelet[2659]: E0710 23:34:42.454221 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:42.459072 containerd[1506]: time="2025-07-10T23:34:42.459024962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58cc5db9cf-tct8w,Uid:b2cc0648-ced9-4ea9-8e51-b7ae4db3642e,Namespace:calico-system,Attempt:0,}" Jul 10 23:34:42.749562 systemd-networkd[1440]: cali934ca3673aa: Link UP Jul 10 23:34:42.750736 systemd-networkd[1440]: cali934ca3673aa: Gained carrier Jul 10 23:34:42.766587 containerd[1506]: 2025-07-10 23:34:42.500 [INFO][3842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 23:34:42.766587 containerd[1506]: 2025-07-10 23:34:42.563 [INFO][3842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--58cc5db9cf--tct8w-eth0 whisker-58cc5db9cf- calico-system b2cc0648-ced9-4ea9-8e51-b7ae4db3642e 912 0 2025-07-10 23:34:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58cc5db9cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-58cc5db9cf-tct8w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali934ca3673aa [] [] }} ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Namespace="calico-system" Pod="whisker-58cc5db9cf-tct8w" WorkloadEndpoint="localhost-k8s-whisker--58cc5db9cf--tct8w-" Jul 10 23:34:42.766587 containerd[1506]: 2025-07-10 23:34:42.563 [INFO][3842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Namespace="calico-system" Pod="whisker-58cc5db9cf-tct8w" WorkloadEndpoint="localhost-k8s-whisker--58cc5db9cf--tct8w-eth0" Jul 10 23:34:42.766587 containerd[1506]: 2025-07-10 23:34:42.688 [INFO][3858] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" HandleID="k8s-pod-network.d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Workload="localhost-k8s-whisker--58cc5db9cf--tct8w-eth0" Jul 10 23:34:42.766869 containerd[1506]: 2025-07-10 23:34:42.689 [INFO][3858] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" HandleID="k8s-pod-network.d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Workload="localhost-k8s-whisker--58cc5db9cf--tct8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033c3c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-58cc5db9cf-tct8w", "timestamp":"2025-07-10 23:34:42.688899405 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:34:42.766869 containerd[1506]: 2025-07-10 23:34:42.689 [INFO][3858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:34:42.766869 containerd[1506]: 2025-07-10 23:34:42.689 [INFO][3858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 23:34:42.766869 containerd[1506]: 2025-07-10 23:34:42.689 [INFO][3858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:34:42.766869 containerd[1506]: 2025-07-10 23:34:42.702 [INFO][3858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" host="localhost" Jul 10 23:34:42.766869 containerd[1506]: 2025-07-10 23:34:42.709 [INFO][3858] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:34:42.766869 containerd[1506]: 2025-07-10 23:34:42.715 [INFO][3858] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:34:42.766869 containerd[1506]: 2025-07-10 23:34:42.718 [INFO][3858] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:42.766869 containerd[1506]: 2025-07-10 23:34:42.721 [INFO][3858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:42.766869 containerd[1506]: 2025-07-10 23:34:42.721 [INFO][3858] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" host="localhost" Jul 10 23:34:42.767063 containerd[1506]: 2025-07-10 23:34:42.723 [INFO][3858] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e Jul 10 23:34:42.767063 containerd[1506]: 2025-07-10 23:34:42.727 [INFO][3858] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" host="localhost" Jul 10 23:34:42.767063 containerd[1506]: 2025-07-10 23:34:42.734 [INFO][3858] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" host="localhost" Jul 10 23:34:42.767063 containerd[1506]: 2025-07-10 23:34:42.735 [INFO][3858] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" host="localhost" Jul 10 23:34:42.767063 containerd[1506]: 2025-07-10 23:34:42.735 [INFO][3858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 23:34:42.767063 containerd[1506]: 2025-07-10 23:34:42.735 [INFO][3858] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" HandleID="k8s-pod-network.d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Workload="localhost-k8s-whisker--58cc5db9cf--tct8w-eth0" Jul 10 23:34:42.767177 containerd[1506]: 2025-07-10 23:34:42.737 [INFO][3842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Namespace="calico-system" Pod="whisker-58cc5db9cf-tct8w" WorkloadEndpoint="localhost-k8s-whisker--58cc5db9cf--tct8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58cc5db9cf--tct8w-eth0", GenerateName:"whisker-58cc5db9cf-", Namespace:"calico-system", SelfLink:"", UID:"b2cc0648-ced9-4ea9-8e51-b7ae4db3642e", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58cc5db9cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-58cc5db9cf-tct8w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali934ca3673aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:42.767177 containerd[1506]: 2025-07-10 23:34:42.738 [INFO][3842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Namespace="calico-system" Pod="whisker-58cc5db9cf-tct8w" WorkloadEndpoint="localhost-k8s-whisker--58cc5db9cf--tct8w-eth0" Jul 10 23:34:42.767244 containerd[1506]: 2025-07-10 23:34:42.738 [INFO][3842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali934ca3673aa ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Namespace="calico-system" Pod="whisker-58cc5db9cf-tct8w" WorkloadEndpoint="localhost-k8s-whisker--58cc5db9cf--tct8w-eth0" Jul 10 23:34:42.767244 containerd[1506]: 2025-07-10 23:34:42.750 [INFO][3842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Namespace="calico-system" Pod="whisker-58cc5db9cf-tct8w" WorkloadEndpoint="localhost-k8s-whisker--58cc5db9cf--tct8w-eth0" Jul 10 23:34:42.767285 containerd[1506]: 2025-07-10 23:34:42.751 [INFO][3842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Namespace="calico-system" Pod="whisker-58cc5db9cf-tct8w" WorkloadEndpoint="localhost-k8s-whisker--58cc5db9cf--tct8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58cc5db9cf--tct8w-eth0", GenerateName:"whisker-58cc5db9cf-", Namespace:"calico-system", SelfLink:"", UID:"b2cc0648-ced9-4ea9-8e51-b7ae4db3642e", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58cc5db9cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e", Pod:"whisker-58cc5db9cf-tct8w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali934ca3673aa", MAC:"d2:2e:c7:84:e2:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:42.767332 containerd[1506]: 2025-07-10 23:34:42.764 [INFO][3842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" Namespace="calico-system" Pod="whisker-58cc5db9cf-tct8w" WorkloadEndpoint="localhost-k8s-whisker--58cc5db9cf--tct8w-eth0" Jul 10 23:34:42.850978 containerd[1506]: time="2025-07-10T23:34:42.850909502Z" level=info msg="connecting to shim d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e" address="unix:///run/containerd/s/fa9b4ea84aecd58fb71093d657612fb7399092cbbdf977d1319874d1d8ec93c1" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:42.879829 systemd[1]: Started cri-containerd-d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e.scope - libcontainer container d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e. 
Jul 10 23:34:42.891888 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:34:42.914450 kubelet[2659]: I0710 23:34:42.914381 2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd1aeee-e722-4a5a-b095-3a9d610562b3" path="/var/lib/kubelet/pods/abd1aeee-e722-4a5a-b095-3a9d610562b3/volumes" Jul 10 23:34:42.925497 containerd[1506]: time="2025-07-10T23:34:42.925423688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58cc5db9cf-tct8w,Uid:b2cc0648-ced9-4ea9-8e51-b7ae4db3642e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e\"" Jul 10 23:34:42.931744 containerd[1506]: time="2025-07-10T23:34:42.930926524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 23:34:43.065456 kubelet[2659]: I0710 23:34:43.063061 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:34:43.065456 kubelet[2659]: E0710 23:34:43.063346 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:43.538423 systemd-networkd[1440]: vxlan.calico: Link UP Jul 10 23:34:43.538433 systemd-networkd[1440]: vxlan.calico: Gained carrier Jul 10 23:34:43.834252 containerd[1506]: time="2025-07-10T23:34:43.833880948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:43.838246 containerd[1506]: time="2025-07-10T23:34:43.838188919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 10 23:34:43.839569 containerd[1506]: time="2025-07-10T23:34:43.839545302Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:43.841916 containerd[1506]: time="2025-07-10T23:34:43.841872786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:43.842691 containerd[1506]: time="2025-07-10T23:34:43.842622024Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 911.656016ms" Jul 10 23:34:43.842772 containerd[1506]: time="2025-07-10T23:34:43.842696912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 10 23:34:43.847538 containerd[1506]: time="2025-07-10T23:34:43.847501336Z" level=info msg="CreateContainer within sandbox \"d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 23:34:43.861688 containerd[1506]: time="2025-07-10T23:34:43.861621256Z" level=info msg="Container a683fb0e8f641a9c22adb3f9a4d5001c3abee6c5a66c241f2477ca9ed62efff1: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:43.876761 containerd[1506]: 
time="2025-07-10T23:34:43.876709958Z" level=info msg="CreateContainer within sandbox \"d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a683fb0e8f641a9c22adb3f9a4d5001c3abee6c5a66c241f2477ca9ed62efff1\"" Jul 10 23:34:43.878865 containerd[1506]: time="2025-07-10T23:34:43.878832861Z" level=info msg="StartContainer for \"a683fb0e8f641a9c22adb3f9a4d5001c3abee6c5a66c241f2477ca9ed62efff1\"" Jul 10 23:34:43.880187 containerd[1506]: time="2025-07-10T23:34:43.880147998Z" level=info msg="connecting to shim a683fb0e8f641a9c22adb3f9a4d5001c3abee6c5a66c241f2477ca9ed62efff1" address="unix:///run/containerd/s/fa9b4ea84aecd58fb71093d657612fb7399092cbbdf977d1319874d1d8ec93c1" protocol=ttrpc version=3 Jul 10 23:34:43.918878 systemd[1]: Started cri-containerd-a683fb0e8f641a9c22adb3f9a4d5001c3abee6c5a66c241f2477ca9ed62efff1.scope - libcontainer container a683fb0e8f641a9c22adb3f9a4d5001c3abee6c5a66c241f2477ca9ed62efff1. Jul 10 23:34:44.021197 containerd[1506]: time="2025-07-10T23:34:44.021093750Z" level=info msg="StartContainer for \"a683fb0e8f641a9c22adb3f9a4d5001c3abee6c5a66c241f2477ca9ed62efff1\" returns successfully" Jul 10 23:34:44.023266 containerd[1506]: time="2025-07-10T23:34:44.023235048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 23:34:44.132268 systemd-networkd[1440]: cali934ca3673aa: Gained IPv6LL Jul 10 23:34:44.766746 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL Jul 10 23:34:46.372514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457654451.mount: Deactivated successfully. Jul 10 23:34:46.393725 containerd[1506]: time="2025-07-10T23:34:46.393626242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:46.394180 containerd[1506]: time="2025-07-10T23:34:46.394150372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 10 23:34:46.395174 containerd[1506]: time="2025-07-10T23:34:46.395122666Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:46.397048 containerd[1506]: time="2025-07-10T23:34:46.397009006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:46.397902 containerd[1506]: time="2025-07-10T23:34:46.397868169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.374593557s" Jul 10 23:34:46.398014 containerd[1506]: time="2025-07-10T23:34:46.397997381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 10 23:34:46.406187 containerd[1506]: time="2025-07-10T23:34:46.406141442Z" level=info msg="CreateContainer within sandbox \"d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e\" for 
container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 23:34:46.413327 containerd[1506]: time="2025-07-10T23:34:46.413269645Z" level=info msg="Container ff4f7afe5a9f0ef0d0ec3b8f6a84a5928d9d3f258a77531e7880d254a29aa67e: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:46.424069 containerd[1506]: time="2025-07-10T23:34:46.424024836Z" level=info msg="CreateContainer within sandbox \"d01b76f48aee94a9972c14d1221a81773b118932eb410e5ed40e1b223c068b9e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ff4f7afe5a9f0ef0d0ec3b8f6a84a5928d9d3f258a77531e7880d254a29aa67e\"" Jul 10 23:34:46.424754 containerd[1506]: time="2025-07-10T23:34:46.424537485Z" level=info msg="StartContainer for \"ff4f7afe5a9f0ef0d0ec3b8f6a84a5928d9d3f258a77531e7880d254a29aa67e\"" Jul 10 23:34:46.426815 containerd[1506]: time="2025-07-10T23:34:46.426782300Z" level=info msg="connecting to shim ff4f7afe5a9f0ef0d0ec3b8f6a84a5928d9d3f258a77531e7880d254a29aa67e" address="unix:///run/containerd/s/fa9b4ea84aecd58fb71093d657612fb7399092cbbdf977d1319874d1d8ec93c1" protocol=ttrpc version=3 Jul 10 23:34:46.448893 systemd[1]: Started cri-containerd-ff4f7afe5a9f0ef0d0ec3b8f6a84a5928d9d3f258a77531e7880d254a29aa67e.scope - libcontainer container ff4f7afe5a9f0ef0d0ec3b8f6a84a5928d9d3f258a77531e7880d254a29aa67e. Jul 10 23:34:46.488314 containerd[1506]: time="2025-07-10T23:34:46.488259832Z" level=info msg="StartContainer for \"ff4f7afe5a9f0ef0d0ec3b8f6a84a5928d9d3f258a77531e7880d254a29aa67e\" returns successfully" Jul 10 23:34:47.088660 kubelet[2659]: I0710 23:34:47.088336 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58cc5db9cf-tct8w" podStartSLOduration=1.618164201 podStartE2EDuration="5.088320358s" podCreationTimestamp="2025-07-10 23:34:42 +0000 UTC" firstStartedPulling="2025-07-10 23:34:42.930574446 +0000 UTC m=+34.140023951" lastFinishedPulling="2025-07-10 23:34:46.400730603 +0000 UTC m=+37.610180108" observedRunningTime="2025-07-10 23:34:47.0877033 +0000 UTC m=+38.297152805" watchObservedRunningTime="2025-07-10 23:34:47.088320358 +0000 UTC m=+38.297769823" Jul 10 23:34:47.482370 kubelet[2659]: I0710 23:34:47.482324 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:34:47.612175 containerd[1506]: time="2025-07-10T23:34:47.612044857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e8f083c1f230f4f3781dfb899ffa75fe54765ea4e7a46f04a66dae12cd244b6\" id:\"aa4de74db747235b1f016590fedc7bc99b2222cf25a0a25ee641d6be37aea78e\" pid:4218 exited_at:{seconds:1752190487 nanos:611430039}" Jul 10 23:34:47.717539 containerd[1506]: time="2025-07-10T23:34:47.717485485Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e8f083c1f230f4f3781dfb899ffa75fe54765ea4e7a46f04a66dae12cd244b6\" id:\"dbfa991887a1b02d7bd2e8d832b5d51b51d3baa023accca038f8fa12eb9ab16d\" pid:4243 exited_at:{seconds:1752190487 nanos:716914832}" Jul 10 23:34:49.899916 containerd[1506]: time="2025-07-10T23:34:49.899612415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c4b7586-c9rnq,Uid:b4725a17-8a37-4480-8b8c-9c73b12960d1,Namespace:calico-apiserver,Attempt:0,}" Jul 10 23:34:49.899916 containerd[1506]: time="2025-07-10T23:34:49.899612335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5444b768d7-9ffjl,Uid:858fb430-f79b-4666-9fc3-06b2bf0c2228,Namespace:calico-system,Attempt:0,}" Jul 10 23:34:50.064988 systemd-networkd[1440]: cali52134cdeb7a: Link UP 
Jul 10 23:34:50.065331 systemd-networkd[1440]: cali52134cdeb7a: Gained carrier Jul 10 23:34:50.082115 containerd[1506]: 2025-07-10 23:34:49.979 [INFO][4264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0 calico-apiserver-7f7c4b7586- calico-apiserver b4725a17-8a37-4480-8b8c-9c73b12960d1 849 0 2025-07-10 23:34:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f7c4b7586 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f7c4b7586-c9rnq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali52134cdeb7a [] [] }} ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c4b7586-c9rnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-" Jul 10 23:34:50.082115 containerd[1506]: 2025-07-10 23:34:49.979 [INFO][4264] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c4b7586-c9rnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0" Jul 10 23:34:50.082115 containerd[1506]: 2025-07-10 23:34:50.011 [INFO][4292] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" HandleID="k8s-pod-network.a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Workload="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0" Jul 10 23:34:50.082383 containerd[1506]: 2025-07-10 23:34:50.011 [INFO][4292] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" HandleID="k8s-pod-network.a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Workload="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f7c4b7586-c9rnq", "timestamp":"2025-07-10 23:34:50.011245384 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:34:50.082383 containerd[1506]: 2025-07-10 23:34:50.011 [INFO][4292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:34:50.082383 containerd[1506]: 2025-07-10 23:34:50.011 [INFO][4292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 23:34:50.082383 containerd[1506]: 2025-07-10 23:34:50.011 [INFO][4292] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:34:50.082383 containerd[1506]: 2025-07-10 23:34:50.029 [INFO][4292] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" host="localhost" Jul 10 23:34:50.082383 containerd[1506]: 2025-07-10 23:34:50.035 [INFO][4292] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:34:50.082383 containerd[1506]: 2025-07-10 23:34:50.040 [INFO][4292] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:34:50.082383 containerd[1506]: 2025-07-10 23:34:50.042 [INFO][4292] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:50.082383 containerd[1506]: 2025-07-10 23:34:50.044 [INFO][4292] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:50.082383 containerd[1506]: 2025-07-10 23:34:50.044 [INFO][4292] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" host="localhost" Jul 10 23:34:50.082608 containerd[1506]: 2025-07-10 23:34:50.046 [INFO][4292] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40 Jul 10 23:34:50.082608 containerd[1506]: 2025-07-10 23:34:50.050 [INFO][4292] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" host="localhost" Jul 10 23:34:50.082608 containerd[1506]: 2025-07-10 23:34:50.057 [INFO][4292] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" host="localhost" Jul 10 23:34:50.082608 containerd[1506]: 2025-07-10 23:34:50.057 [INFO][4292] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" host="localhost" Jul 10 23:34:50.082608 containerd[1506]: 2025-07-10 23:34:50.057 [INFO][4292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:34:50.082608 containerd[1506]: 2025-07-10 23:34:50.057 [INFO][4292] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" HandleID="k8s-pod-network.a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Workload="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0" Jul 10 23:34:50.082823 containerd[1506]: 2025-07-10 23:34:50.059 [INFO][4264] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c4b7586-c9rnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0", GenerateName:"calico-apiserver-7f7c4b7586-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4725a17-8a37-4480-8b8c-9c73b12960d1", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c4b7586", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f7c4b7586-c9rnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52134cdeb7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:50.082895 containerd[1506]: 2025-07-10 23:34:50.059 [INFO][4264] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c4b7586-c9rnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0" Jul 10 23:34:50.082895 containerd[1506]: 2025-07-10 23:34:50.059 [INFO][4264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52134cdeb7a ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c4b7586-c9rnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0" Jul 10 23:34:50.082895 containerd[1506]: 2025-07-10 23:34:50.065 [INFO][4264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c4b7586-c9rnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0" Jul 10 23:34:50.082971 containerd[1506]: 2025-07-10 23:34:50.067 [INFO][4264] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c4b7586-c9rnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0", GenerateName:"calico-apiserver-7f7c4b7586-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4725a17-8a37-4480-8b8c-9c73b12960d1", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c4b7586", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40", Pod:"calico-apiserver-7f7c4b7586-c9rnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52134cdeb7a", MAC:"42:36:b6:d3:b7:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:50.083025 containerd[1506]: 2025-07-10 23:34:50.079 [INFO][4264] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" Namespace="calico-apiserver" Pod="calico-apiserver-7f7c4b7586-c9rnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f7c4b7586--c9rnq-eth0" Jul 10 23:34:50.154622 containerd[1506]: time="2025-07-10T23:34:50.154148272Z" level=info msg="connecting to shim a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40" address="unix:///run/containerd/s/d2f3f451247362b727db69dcc2191dcdce40f9bd3c62cad38a8217410b9fddee" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:50.181305 systemd-networkd[1440]: caliedfdd4eefd6: Link UP Jul 10 23:34:50.181505 systemd-networkd[1440]: caliedfdd4eefd6: Gained carrier Jul 10 23:34:50.188533 systemd[1]: Started cri-containerd-a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40.scope - libcontainer container a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40. 
Jul 10 23:34:50.201507 containerd[1506]: 2025-07-10 23:34:49.982 [INFO][4276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0 calico-kube-controllers-5444b768d7- calico-system 858fb430-f79b-4666-9fc3-06b2bf0c2228 846 0 2025-07-10 23:34:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5444b768d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5444b768d7-9ffjl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliedfdd4eefd6 [] [] }} ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Namespace="calico-system" Pod="calico-kube-controllers-5444b768d7-9ffjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-" Jul 10 23:34:50.201507 containerd[1506]: 2025-07-10 23:34:49.983 [INFO][4276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Namespace="calico-system" Pod="calico-kube-controllers-5444b768d7-9ffjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0" Jul 10 23:34:50.201507 containerd[1506]: 2025-07-10 23:34:50.031 [INFO][4298] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" HandleID="k8s-pod-network.03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Workload="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0" Jul 10 23:34:50.201716 containerd[1506]: 2025-07-10 23:34:50.032 [INFO][4298] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" HandleID="k8s-pod-network.03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Workload="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d5630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5444b768d7-9ffjl", "timestamp":"2025-07-10 23:34:50.031938649 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:34:50.201716 containerd[1506]: 2025-07-10 23:34:50.032 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:34:50.201716 containerd[1506]: 2025-07-10 23:34:50.057 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 23:34:50.201716 containerd[1506]: 2025-07-10 23:34:50.057 [INFO][4298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:34:50.201716 containerd[1506]: 2025-07-10 23:34:50.131 [INFO][4298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" host="localhost" Jul 10 23:34:50.201716 containerd[1506]: 2025-07-10 23:34:50.138 [INFO][4298] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:34:50.201716 containerd[1506]: 2025-07-10 23:34:50.151 [INFO][4298] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:34:50.201716 containerd[1506]: 2025-07-10 23:34:50.154 [INFO][4298] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:50.201716 containerd[1506]: 2025-07-10 23:34:50.157 [INFO][4298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:50.201716 containerd[1506]: 2025-07-10 23:34:50.157 [INFO][4298] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" host="localhost" Jul 10 23:34:50.201928 containerd[1506]: 2025-07-10 23:34:50.159 [INFO][4298] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329 Jul 10 23:34:50.201928 containerd[1506]: 2025-07-10 23:34:50.166 [INFO][4298] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" host="localhost" Jul 10 23:34:50.201928 containerd[1506]: 2025-07-10 23:34:50.176 [INFO][4298] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" host="localhost" Jul 10 23:34:50.201928 containerd[1506]: 2025-07-10 23:34:50.176 [INFO][4298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" host="localhost" Jul 10 23:34:50.201928 containerd[1506]: 2025-07-10 23:34:50.176 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:34:50.201928 containerd[1506]: 2025-07-10 23:34:50.176 [INFO][4298] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" HandleID="k8s-pod-network.03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Workload="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0" Jul 10 23:34:50.202057 containerd[1506]: 2025-07-10 23:34:50.179 [INFO][4276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Namespace="calico-system" Pod="calico-kube-controllers-5444b768d7-9ffjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0", GenerateName:"calico-kube-controllers-5444b768d7-", Namespace:"calico-system", SelfLink:"", UID:"858fb430-f79b-4666-9fc3-06b2bf0c2228", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5444b768d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5444b768d7-9ffjl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedfdd4eefd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:50.202104 containerd[1506]: 2025-07-10 23:34:50.179 [INFO][4276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Namespace="calico-system" Pod="calico-kube-controllers-5444b768d7-9ffjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0" Jul 10 23:34:50.202104 containerd[1506]: 2025-07-10 23:34:50.179 [INFO][4276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedfdd4eefd6 ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Namespace="calico-system" Pod="calico-kube-controllers-5444b768d7-9ffjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0" Jul 10 23:34:50.202104 containerd[1506]: 2025-07-10 23:34:50.182 [INFO][4276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Namespace="calico-system" Pod="calico-kube-controllers-5444b768d7-9ffjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0" Jul 10 23:34:50.202165 containerd[1506]: 2025-07-10 23:34:50.184 [INFO][4276] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Namespace="calico-system" Pod="calico-kube-controllers-5444b768d7-9ffjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0", GenerateName:"calico-kube-controllers-5444b768d7-", Namespace:"calico-system", SelfLink:"", UID:"858fb430-f79b-4666-9fc3-06b2bf0c2228", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5444b768d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329", Pod:"calico-kube-controllers-5444b768d7-9ffjl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedfdd4eefd6", MAC:"22:ce:c8:9f:2f:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:50.202211 containerd[1506]: 2025-07-10 23:34:50.196 [INFO][4276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" Namespace="calico-system" Pod="calico-kube-controllers-5444b768d7-9ffjl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5444b768d7--9ffjl-eth0" Jul 10 23:34:50.209603 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:34:50.225297 containerd[1506]: time="2025-07-10T23:34:50.225258646Z" level=info msg="connecting to shim 03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329" address="unix:///run/containerd/s/20f874f2b75cfef8c09f838756afefa9b071ad690c925e385ba10c762c8aae88" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:50.258597 containerd[1506]: time="2025-07-10T23:34:50.258536557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c4b7586-c9rnq,Uid:b4725a17-8a37-4480-8b8c-9c73b12960d1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40\"" Jul 10 23:34:50.260056 systemd[1]: Started cri-containerd-03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329.scope - libcontainer container 03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329. 
Jul 10 23:34:50.260350 containerd[1506]: time="2025-07-10T23:34:50.260324831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 23:34:50.277391 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:34:50.327519 containerd[1506]: time="2025-07-10T23:34:50.327373975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5444b768d7-9ffjl,Uid:858fb430-f79b-4666-9fc3-06b2bf0c2228,Namespace:calico-system,Attempt:0,} returns sandbox id \"03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329\"" Jul 10 23:34:50.899219 kubelet[2659]: E0710 23:34:50.899171 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:50.899940 containerd[1506]: time="2025-07-10T23:34:50.899897365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ltc94,Uid:484e1c2d-e63d-4aa0-89fe-b0044406bec6,Namespace:kube-system,Attempt:0,}" Jul 10 23:34:50.900176 containerd[1506]: time="2025-07-10T23:34:50.900059219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf67d85c-cg7hx,Uid:1aa67573-c049-4a50-9fad-b9f78cda01f2,Namespace:calico-apiserver,Attempt:0,}" Jul 10 23:34:50.900205 containerd[1506]: time="2025-07-10T23:34:50.900173469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7q7p2,Uid:303da94b-4882-4812-800c-56f0e558072c,Namespace:calico-system,Attempt:0,}" Jul 10 23:34:50.946436 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:56056.service - OpenSSH per-connection server daemon (10.0.0.1:56056). Jul 10 23:34:51.044974 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 56056 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:34:51.047130 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:34:51.053071 systemd-logind[1478]: New session 8 of user core. Jul 10 23:34:51.058855 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 10 23:34:51.108617 systemd-networkd[1440]: cali0391d5af07c: Link UP Jul 10 23:34:51.108786 systemd-networkd[1440]: cali0391d5af07c: Gained carrier Jul 10 23:34:51.140373 containerd[1506]: 2025-07-10 23:34:50.963 [INFO][4426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ltc94-eth0 coredns-674b8bbfcf- kube-system 484e1c2d-e63d-4aa0-89fe-b0044406bec6 839 0 2025-07-10 23:34:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ltc94 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0391d5af07c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ltc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ltc94-" Jul 10 23:34:51.140373 containerd[1506]: 2025-07-10 23:34:50.963 [INFO][4426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ltc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ltc94-eth0" Jul 10 23:34:51.140373 containerd[1506]: 2025-07-10 23:34:51.032 [INFO][4470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" HandleID="k8s-pod-network.cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Workload="localhost-k8s-coredns--674b8bbfcf--ltc94-eth0" Jul 10 23:34:51.141017 containerd[1506]: 2025-07-10 23:34:51.032 [INFO][4470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" HandleID="k8s-pod-network.cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Workload="localhost-k8s-coredns--674b8bbfcf--ltc94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ltc94", "timestamp":"2025-07-10 23:34:51.032308523 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:34:51.141017 containerd[1506]: 2025-07-10 23:34:51.032 [INFO][4470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:34:51.141017 containerd[1506]: 2025-07-10 23:34:51.032 [INFO][4470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 23:34:51.141017 containerd[1506]: 2025-07-10 23:34:51.032 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:34:51.141017 containerd[1506]: 2025-07-10 23:34:51.050 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" host="localhost" Jul 10 23:34:51.141017 containerd[1506]: 2025-07-10 23:34:51.058 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:34:51.141017 containerd[1506]: 2025-07-10 23:34:51.074 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:34:51.141017 containerd[1506]: 2025-07-10 23:34:51.077 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:51.141017 containerd[1506]: 2025-07-10 23:34:51.081 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:51.141017 containerd[1506]: 2025-07-10 23:34:51.081 [INFO][4470] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" host="localhost" Jul 10 23:34:51.141315 containerd[1506]: 2025-07-10 23:34:51.085 [INFO][4470] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b Jul 10 23:34:51.141315 containerd[1506]: 2025-07-10 23:34:51.091 [INFO][4470] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" host="localhost" Jul 10 23:34:51.141315 containerd[1506]: 2025-07-10 23:34:51.098 [INFO][4470] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" host="localhost" Jul 10 23:34:51.141315 containerd[1506]: 2025-07-10 23:34:51.098 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" host="localhost" Jul 10 23:34:51.141315 containerd[1506]: 2025-07-10 23:34:51.098 [INFO][4470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:34:51.141315 containerd[1506]: 2025-07-10 23:34:51.098 [INFO][4470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" HandleID="k8s-pod-network.cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Workload="localhost-k8s-coredns--674b8bbfcf--ltc94-eth0" Jul 10 23:34:51.141433 containerd[1506]: 2025-07-10 23:34:51.105 [INFO][4426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ltc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ltc94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ltc94-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"484e1c2d-e63d-4aa0-89fe-b0044406bec6", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ltc94", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0391d5af07c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:51.141536 containerd[1506]: 2025-07-10 23:34:51.106 [INFO][4426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ltc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ltc94-eth0" Jul 10 23:34:51.141536 containerd[1506]: 2025-07-10 23:34:51.106 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0391d5af07c ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ltc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ltc94-eth0" Jul 10 23:34:51.141536 containerd[1506]: 2025-07-10 23:34:51.108 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ltc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ltc94-eth0" Jul 10 23:34:51.141617 
containerd[1506]: 2025-07-10 23:34:51.111 [INFO][4426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ltc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ltc94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ltc94-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"484e1c2d-e63d-4aa0-89fe-b0044406bec6", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b", Pod:"coredns-674b8bbfcf-ltc94", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0391d5af07c", MAC:"1a:a0:7d:63:6f:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:51.141617 containerd[1506]: 2025-07-10 23:34:51.131 [INFO][4426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ltc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ltc94-eth0" Jul 10 23:34:51.199866 containerd[1506]: time="2025-07-10T23:34:51.198834430Z" level=info msg="connecting to shim cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b" address="unix:///run/containerd/s/700f22c446441f5aad684d09e1ddd72b9cb6ebf9e070689dc8625783ff55b2cd" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:51.206766 systemd-networkd[1440]: cali4c62c565305: Link UP Jul 10 23:34:51.206966 systemd-networkd[1440]: cali4c62c565305: Gained carrier Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:50.971 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0 calico-apiserver-7bcf67d85c- calico-apiserver 1aa67573-c049-4a50-9fad-b9f78cda01f2 847 0 2025-07-10 23:34:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bcf67d85c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bcf67d85c-cg7hx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c62c565305 [] [] }} ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-cg7hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:50.971 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-cg7hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.040 [INFO][4477] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" HandleID="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Workload="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.041 [INFO][4477] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" HandleID="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Workload="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059eda0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bcf67d85c-cg7hx", "timestamp":"2025-07-10 23:34:51.040614223 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.041 [INFO][4477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.098 [INFO][4477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.099 [INFO][4477] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.149 [INFO][4477] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" host="localhost" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.160 [INFO][4477] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.169 [INFO][4477] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.172 [INFO][4477] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.176 [INFO][4477] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.176 [INFO][4477] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" host="localhost" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.178 [INFO][4477] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.184 [INFO][4477] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" host="localhost" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.192 [INFO][4477] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" host="localhost" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.192 [INFO][4477] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" host="localhost" Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.192 [INFO][4477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:34:51.221548 containerd[1506]: 2025-07-10 23:34:51.192 [INFO][4477] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" HandleID="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Workload="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" Jul 10 23:34:51.223967 containerd[1506]: 2025-07-10 23:34:51.201 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-cg7hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0", GenerateName:"calico-apiserver-7bcf67d85c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aa67573-c049-4a50-9fad-b9f78cda01f2", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcf67d85c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bcf67d85c-cg7hx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c62c565305", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:51.223967 containerd[1506]: 2025-07-10 23:34:51.201 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-cg7hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" Jul 10 23:34:51.223967 containerd[1506]: 2025-07-10 23:34:51.201 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c62c565305 ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-cg7hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" Jul 10 23:34:51.223967 containerd[1506]: 2025-07-10 23:34:51.204 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-cg7hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" Jul 10 23:34:51.223967 containerd[1506]: 2025-07-10 23:34:51.206 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-cg7hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0", GenerateName:"calico-apiserver-7bcf67d85c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aa67573-c049-4a50-9fad-b9f78cda01f2", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcf67d85c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b", Pod:"calico-apiserver-7bcf67d85c-cg7hx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c62c565305", MAC:"e6:bb:34:65:2c:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:51.223967 containerd[1506]: 2025-07-10 23:34:51.216 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-cg7hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" Jul 10 23:34:51.281508 containerd[1506]: time="2025-07-10T23:34:51.281113201Z" level=info msg="connecting to shim dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" address="unix:///run/containerd/s/58beba3f7ce952de7b352160c7403e0673532db1d8a71b82bac983be55b56399" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:51.285905 systemd[1]: Started cri-containerd-cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b.scope - libcontainer container cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b. 
Jul 10 23:34:51.312503 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:34:51.333838 systemd-networkd[1440]: calif0b00534835: Link UP Jul 10 23:34:51.335397 systemd-networkd[1440]: calif0b00534835: Gained carrier Jul 10 23:34:51.362135 containerd[1506]: time="2025-07-10T23:34:51.362098703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ltc94,Uid:484e1c2d-e63d-4aa0-89fe-b0044406bec6,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b\"" Jul 10 23:34:51.363515 kubelet[2659]: E0710 23:34:51.363442 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:50.992 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0 goldmane-768f4c5c69- calico-system 303da94b-4882-4812-800c-56f0e558072c 848 0 2025-07-10 23:34:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-7q7p2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif0b00534835 [] [] }} ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Namespace="calico-system" Pod="goldmane-768f4c5c69-7q7p2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7q7p2-" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:50.992 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Namespace="calico-system" Pod="goldmane-768f4c5c69-7q7p2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.048 [INFO][4486] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" HandleID="k8s-pod-network.0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Workload="localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.049 [INFO][4486] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" HandleID="k8s-pod-network.0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Workload="localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-7q7p2", "timestamp":"2025-07-10 23:34:51.048625098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.049 [INFO][4486] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.192 [INFO][4486] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.192 [INFO][4486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.254 [INFO][4486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" host="localhost" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.271 [INFO][4486] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.279 [INFO][4486] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.283 [INFO][4486] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.288 [INFO][4486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.288 [INFO][4486] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" host="localhost" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.293 [INFO][4486] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.305 [INFO][4486] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" host="localhost" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.321 [INFO][4486] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" host="localhost" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.321 [INFO][4486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" host="localhost" Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.321 [INFO][4486] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:34:51.372019 containerd[1506]: 2025-07-10 23:34:51.321 [INFO][4486] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" HandleID="k8s-pod-network.0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Workload="localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0" Jul 10 23:34:51.372505 containerd[1506]: 2025-07-10 23:34:51.329 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Namespace="calico-system" Pod="goldmane-768f4c5c69-7q7p2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"303da94b-4882-4812-800c-56f0e558072c", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-7q7p2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0b00534835", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:51.372505 containerd[1506]: 2025-07-10 23:34:51.329 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Namespace="calico-system" Pod="goldmane-768f4c5c69-7q7p2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0" Jul 10 23:34:51.372505 containerd[1506]: 2025-07-10 23:34:51.329 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0b00534835 ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Namespace="calico-system" Pod="goldmane-768f4c5c69-7q7p2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0" Jul 10 23:34:51.372505 containerd[1506]: 2025-07-10 23:34:51.333 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Namespace="calico-system" Pod="goldmane-768f4c5c69-7q7p2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0" Jul 10 23:34:51.372505 containerd[1506]: 2025-07-10 23:34:51.337 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Namespace="calico-system" Pod="goldmane-768f4c5c69-7q7p2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"303da94b-4882-4812-800c-56f0e558072c", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f", Pod:"goldmane-768f4c5c69-7q7p2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0b00534835", MAC:"ba:53:8b:48:fd:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:51.372505 containerd[1506]: 2025-07-10 23:34:51.363 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" Namespace="calico-system" Pod="goldmane-768f4c5c69-7q7p2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7q7p2-eth0" Jul 10 23:34:51.375225 containerd[1506]: time="2025-07-10T23:34:51.375195646Z" level=info msg="CreateContainer within sandbox \"cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:34:51.386247 systemd[1]: Started cri-containerd-dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b.scope - libcontainer container dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b. 
Jul 10 23:34:51.412798 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:34:51.422766 systemd-networkd[1440]: caliedfdd4eefd6: Gained IPv6LL Jul 10 23:34:51.431180 containerd[1506]: time="2025-07-10T23:34:51.430952063Z" level=info msg="connecting to shim 0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f" address="unix:///run/containerd/s/75b730e9122f252dd397f12af3d74147a8b85a41297eff0113bb9d9e4cefb38d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:51.446653 containerd[1506]: time="2025-07-10T23:34:51.446436727Z" level=info msg="Container e35317e241cac7dbadd6639ea5e6d1b01c16d2b0b3a7acd9d0c486deae78c585: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:51.456156 containerd[1506]: time="2025-07-10T23:34:51.456097101Z" level=info msg="CreateContainer within sandbox \"cc89f852ef353220a7087ad077f30979473cc4fa78a972e93e7c01725da33d9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e35317e241cac7dbadd6639ea5e6d1b01c16d2b0b3a7acd9d0c486deae78c585\"" Jul 10 23:34:51.456697 containerd[1506]: time="2025-07-10T23:34:51.456626305Z" level=info msg="StartContainer for \"e35317e241cac7dbadd6639ea5e6d1b01c16d2b0b3a7acd9d0c486deae78c585\"" Jul 10 23:34:51.458306 containerd[1506]: time="2025-07-10T23:34:51.458177116Z" level=info msg="connecting to shim e35317e241cac7dbadd6639ea5e6d1b01c16d2b0b3a7acd9d0c486deae78c585" address="unix:///run/containerd/s/700f22c446441f5aad684d09e1ddd72b9cb6ebf9e070689dc8625783ff55b2cd" protocol=ttrpc version=3 Jul 10 23:34:51.474798 containerd[1506]: time="2025-07-10T23:34:51.474757233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf67d85c-cg7hx,Uid:1aa67573-c049-4a50-9fad-b9f78cda01f2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b\"" Jul 10 23:34:51.476834 systemd[1]: Started cri-containerd-0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f.scope - libcontainer container 0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f. Jul 10 23:34:51.492307 systemd[1]: Started cri-containerd-e35317e241cac7dbadd6639ea5e6d1b01c16d2b0b3a7acd9d0c486deae78c585.scope - libcontainer container e35317e241cac7dbadd6639ea5e6d1b01c16d2b0b3a7acd9d0c486deae78c585. Jul 10 23:34:51.501749 sshd[4497]: Connection closed by 10.0.0.1 port 56056 Jul 10 23:34:51.502384 sshd-session[4465]: pam_unix(sshd:session): session closed for user core Jul 10 23:34:51.506993 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:56056.service: Deactivated successfully. Jul 10 23:34:51.512459 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 23:34:51.513944 systemd-logind[1478]: Session 8 logged out. Waiting for processes to exit. Jul 10 23:34:51.516052 systemd-logind[1478]: Removed session 8. 
Jul 10 23:34:51.522162 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:34:51.544926 containerd[1506]: time="2025-07-10T23:34:51.544835176Z" level=info msg="StartContainer for \"e35317e241cac7dbadd6639ea5e6d1b01c16d2b0b3a7acd9d0c486deae78c585\" returns successfully" Jul 10 23:34:51.563179 containerd[1506]: time="2025-07-10T23:34:51.563105114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7q7p2,Uid:303da94b-4882-4812-800c-56f0e558072c,Namespace:calico-system,Attempt:0,} returns sandbox id \"0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f\"" Jul 10 23:34:51.616746 systemd-networkd[1440]: cali52134cdeb7a: Gained IPv6LL Jul 10 23:34:51.899750 containerd[1506]: time="2025-07-10T23:34:51.899616060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf67d85c-7lnr7,Uid:10e61f78-d593-4fa1-ae09-fcdfd9ac29bc,Namespace:calico-apiserver,Attempt:0,}" Jul 10 23:34:51.978270 containerd[1506]: time="2025-07-10T23:34:51.978226602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:51.978615 containerd[1506]: time="2025-07-10T23:34:51.978541508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 10 23:34:51.979468 containerd[1506]: time="2025-07-10T23:34:51.979410222Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:51.981581 containerd[1506]: time="2025-07-10T23:34:51.981549962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:51.982609 containerd[1506]: time="2025-07-10T23:34:51.982533045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.72216909s" Jul 10 23:34:51.982752 containerd[1506]: time="2025-07-10T23:34:51.982728501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 23:34:51.984744 containerd[1506]: time="2025-07-10T23:34:51.984681106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 23:34:51.987109 containerd[1506]: time="2025-07-10T23:34:51.986644711Z" level=info msg="CreateContainer within sandbox \"a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 23:34:51.998696 containerd[1506]: time="2025-07-10T23:34:51.998618960Z" level=info msg="Container 4f72bd721c1ce34feb0ba9552fe8b234b6eb61a5d7afa64944002a29d7d8bef1: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:52.006792 containerd[1506]: time="2025-07-10T23:34:52.006740112Z" level=info msg="CreateContainer within sandbox \"a31a976c466963881303500f99371c37f72fd95dbec9335859e94ba3e1191d40\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4f72bd721c1ce34feb0ba9552fe8b234b6eb61a5d7afa64944002a29d7d8bef1\"" Jul 10 23:34:52.007288 containerd[1506]: time="2025-07-10T23:34:52.007260595Z" level=info msg="StartContainer for \"4f72bd721c1ce34feb0ba9552fe8b234b6eb61a5d7afa64944002a29d7d8bef1\"" Jul 10 23:34:52.009248 containerd[1506]: time="2025-07-10T23:34:52.009221876Z" level=info msg="connecting to shim 4f72bd721c1ce34feb0ba9552fe8b234b6eb61a5d7afa64944002a29d7d8bef1" address="unix:///run/containerd/s/d2f3f451247362b727db69dcc2191dcdce40f9bd3c62cad38a8217410b9fddee" protocol=ttrpc version=3 Jul 10 23:34:52.028815 systemd[1]: Started cri-containerd-4f72bd721c1ce34feb0ba9552fe8b234b6eb61a5d7afa64944002a29d7d8bef1.scope - libcontainer container 4f72bd721c1ce34feb0ba9552fe8b234b6eb61a5d7afa64944002a29d7d8bef1. Jul 10 23:34:52.033125 systemd-networkd[1440]: cali13df9d8a008: Link UP Jul 10 23:34:52.033299 systemd-networkd[1440]: cali13df9d8a008: Gained carrier Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:51.956 [INFO][4724] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0 calico-apiserver-7bcf67d85c- calico-apiserver 10e61f78-d593-4fa1-ae09-fcdfd9ac29bc 851 0 2025-07-10 23:34:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bcf67d85c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bcf67d85c-7lnr7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali13df9d8a008 [] [] }} ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-7lnr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:51.956 [INFO][4724] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-7lnr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:51.981 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" HandleID="k8s-pod-network.27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Workload="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:51.981 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" HandleID="k8s-pod-network.27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Workload="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bcf67d85c-7lnr7", "timestamp":"2025-07-10 23:34:51.981152648 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:51.981 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:51.981 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:51.981 [INFO][4739] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:51.992 [INFO][4739] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" host="localhost" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.001 [INFO][4739] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.006 [INFO][4739] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.008 [INFO][4739] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.013 [INFO][4739] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.013 [INFO][4739] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" host="localhost" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.015 [INFO][4739] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923 Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.019 [INFO][4739] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" host="localhost" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.026 [INFO][4739] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" host="localhost" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.026 [INFO][4739] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" host="localhost" Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.026 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:34:52.047454 containerd[1506]: 2025-07-10 23:34:52.026 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" HandleID="k8s-pod-network.27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Workload="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0" Jul 10 23:34:52.047985 containerd[1506]: 2025-07-10 23:34:52.030 [INFO][4724] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-7lnr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0", GenerateName:"calico-apiserver-7bcf67d85c-", Namespace:"calico-apiserver", SelfLink:"", UID:"10e61f78-d593-4fa1-ae09-fcdfd9ac29bc", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcf67d85c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bcf67d85c-7lnr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13df9d8a008", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:52.047985 containerd[1506]: 2025-07-10 23:34:52.030 [INFO][4724] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-7lnr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0" Jul 10 23:34:52.047985 containerd[1506]: 2025-07-10 23:34:52.030 [INFO][4724] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13df9d8a008 ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-7lnr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0" Jul 10 23:34:52.047985 containerd[1506]: 2025-07-10 23:34:52.032 [INFO][4724] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-7lnr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0" Jul 10 23:34:52.047985 containerd[1506]: 2025-07-10 23:34:52.034 [INFO][4724] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-7lnr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0", GenerateName:"calico-apiserver-7bcf67d85c-", Namespace:"calico-apiserver", SelfLink:"", UID:"10e61f78-d593-4fa1-ae09-fcdfd9ac29bc", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcf67d85c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923", Pod:"calico-apiserver-7bcf67d85c-7lnr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13df9d8a008", MAC:"42:09:3e:73:4f:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:52.047985 containerd[1506]: 2025-07-10 23:34:52.044 [INFO][4724] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" Namespace="calico-apiserver" Pod="calico-apiserver-7bcf67d85c-7lnr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bcf67d85c--7lnr7-eth0" Jul 10 23:34:52.074077 containerd[1506]: time="2025-07-10T23:34:52.074040173Z" level=info msg="connecting to shim 27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923" address="unix:///run/containerd/s/a2c40479b7b35e4b054c66c3ced506cc192af837f8c1ed1f9b1d32164729d617" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:52.089914 containerd[1506]: time="2025-07-10T23:34:52.089863115Z" level=info msg="StartContainer for \"4f72bd721c1ce34feb0ba9552fe8b234b6eb61a5d7afa64944002a29d7d8bef1\" returns successfully" Jul 10 23:34:52.101807 systemd[1]: Started cri-containerd-27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923.scope - libcontainer container 27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923. 
Jul 10 23:34:52.117735 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:34:52.138468 kubelet[2659]: E0710 23:34:52.136458 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:52.142545 kubelet[2659]: I0710 23:34:52.142304 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f7c4b7586-c9rnq" podStartSLOduration=25.418362312 podStartE2EDuration="27.142287831s" podCreationTimestamp="2025-07-10 23:34:25 +0000 UTC" firstStartedPulling="2025-07-10 23:34:50.25996792 +0000 UTC m=+41.469417425" lastFinishedPulling="2025-07-10 23:34:51.983893399 +0000 UTC m=+43.193342944" observedRunningTime="2025-07-10 23:34:52.14178355 +0000 UTC m=+43.351233055" watchObservedRunningTime="2025-07-10 23:34:52.142287831 +0000 UTC m=+43.351737336" Jul 10 23:34:52.156677 kubelet[2659]: I0710 23:34:52.156418 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ltc94" podStartSLOduration=37.156401553 podStartE2EDuration="37.156401553s" podCreationTimestamp="2025-07-10 23:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:34:52.155601128 +0000 UTC m=+43.365050633" watchObservedRunningTime="2025-07-10 23:34:52.156401553 +0000 UTC m=+43.365851058" Jul 10 23:34:52.164686 containerd[1506]: time="2025-07-10T23:34:52.164590148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcf67d85c-7lnr7,Uid:10e61f78-d593-4fa1-ae09-fcdfd9ac29bc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923\"" Jul 10 23:34:52.170284 containerd[1506]: time="2025-07-10T23:34:52.170107202Z" level=info msg="CreateContainer within sandbox \"27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 23:34:52.175889 containerd[1506]: time="2025-07-10T23:34:52.175854435Z" level=info msg="Container 9a58a608426a9e88330941af368a4c7ae9bed8fa958cc2f6de009f0b6536321f: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:52.181885 containerd[1506]: time="2025-07-10T23:34:52.181847888Z" level=info msg="CreateContainer within sandbox \"27332cefde92e7dd5a37bb2e78cf4d62218de2e64428599d49201206381d4923\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9a58a608426a9e88330941af368a4c7ae9bed8fa958cc2f6de009f0b6536321f\"" Jul 10 23:34:52.183413 containerd[1506]: time="2025-07-10T23:34:52.183388255Z" level=info msg="StartContainer for \"9a58a608426a9e88330941af368a4c7ae9bed8fa958cc2f6de009f0b6536321f\"" Jul 10 23:34:52.186174 containerd[1506]: time="2025-07-10T23:34:52.186139882Z" level=info msg="connecting to shim 9a58a608426a9e88330941af368a4c7ae9bed8fa958cc2f6de009f0b6536321f" address="unix:///run/containerd/s/a2c40479b7b35e4b054c66c3ced506cc192af837f8c1ed1f9b1d32164729d617" protocol=ttrpc version=3 Jul 10 23:34:52.206812 systemd[1]: Started cri-containerd-9a58a608426a9e88330941af368a4c7ae9bed8fa958cc2f6de009f0b6536321f.scope - libcontainer container 9a58a608426a9e88330941af368a4c7ae9bed8fa958cc2f6de009f0b6536321f. 
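The pod_startup_latency_tracker entry above reports podStartE2EDuration=27.142287831s but podStartSLOduration=25.418362312; the gap is almost exactly the image-pull window (lastFinishedPulling minus firstStartedPulling), consistent with the SLO figure excluding pull time. A rough check of that relation, parsing the wall-clock parts of the timestamps (the trailing " m=+…" monotonic readings dropped) with Go's default time layout:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the kubelet entry for calico-apiserver-7f7c4b7586-c9rnq;
        // the layout matches Go's default time.Time formatting used in these logs.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        created := parse("2025-07-10 23:34:25 +0000 UTC")           // podCreationTimestamp
        running := parse("2025-07-10 23:34:52.14178355 +0000 UTC")  // observedRunningTime
        pullStart := parse("2025-07-10 23:34:50.25996792 +0000 UTC")
        pullEnd := parse("2025-07-10 23:34:51.983893399 +0000 UTC")

        e2e := running.Sub(created)    // reported: 27.142287831s
        pull := pullEnd.Sub(pullStart) // image-pull window
        fmt.Println("e2e:", e2e, " slo ≈", e2e-pull) // reported SLO: 25.418362312s
    }

The computed figures agree with the reported ones to within a millisecond; the same kind of timestamp subtraction also lines up with containerd's "in 1.72216909s" pull reports elsewhere in the log.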
Jul 10 23:34:52.255753 systemd-networkd[1440]: cali0391d5af07c: Gained IPv6LL Jul 10 23:34:52.258913 containerd[1506]: time="2025-07-10T23:34:52.258861509Z" level=info msg="StartContainer for \"9a58a608426a9e88330941af368a4c7ae9bed8fa958cc2f6de009f0b6536321f\" returns successfully" Jul 10 23:34:52.383748 systemd-networkd[1440]: cali4c62c565305: Gained IPv6LL Jul 10 23:34:52.638809 systemd-networkd[1440]: calif0b00534835: Gained IPv6LL Jul 10 23:34:52.901073 kubelet[2659]: E0710 23:34:52.900977 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:52.902393 containerd[1506]: time="2025-07-10T23:34:52.902331885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8kb7,Uid:51f1e356-7683-46d3-a698-c1e84639d1c3,Namespace:kube-system,Attempt:0,}" Jul 10 23:34:53.089961 systemd-networkd[1440]: califf6303a8c71: Link UP Jul 10 23:34:53.091865 systemd-networkd[1440]: califf6303a8c71: Gained carrier Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:52.975 [INFO][4884] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0 coredns-674b8bbfcf- kube-system 51f1e356-7683-46d3-a698-c1e84639d1c3 843 0 2025-07-10 23:34:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-s8kb7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf6303a8c71 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8kb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s8kb7-" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:52.976 [INFO][4884] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8kb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.011 [INFO][4899] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" HandleID="k8s-pod-network.55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Workload="localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.011 [INFO][4899] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" HandleID="k8s-pod-network.55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Workload="localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003227f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-s8kb7", "timestamp":"2025-07-10 23:34:53.011228711 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.011 
[INFO][4899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.011 [INFO][4899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.011 [INFO][4899] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.032 [INFO][4899] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" host="localhost" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.042 [INFO][4899] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.048 [INFO][4899] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.051 [INFO][4899] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.056 [INFO][4899] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.056 [INFO][4899] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" host="localhost" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.060 [INFO][4899] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.066 [INFO][4899] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" host="localhost" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.075 [INFO][4899] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" host="localhost" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.075 [INFO][4899] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" host="localhost" Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.075 [INFO][4899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:34:53.111396 containerd[1506]: 2025-07-10 23:34:53.075 [INFO][4899] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" HandleID="k8s-pod-network.55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Workload="localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0" Jul 10 23:34:53.112197 containerd[1506]: 2025-07-10 23:34:53.080 [INFO][4884] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8kb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"51f1e356-7683-46d3-a698-c1e84639d1c3", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-s8kb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf6303a8c71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:53.112197 containerd[1506]: 2025-07-10 23:34:53.080 [INFO][4884] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8kb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0" Jul 10 23:34:53.112197 containerd[1506]: 2025-07-10 23:34:53.080 [INFO][4884] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf6303a8c71 ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8kb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0" Jul 10 23:34:53.112197 containerd[1506]: 2025-07-10 23:34:53.092 [INFO][4884] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8kb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0" Jul 10 23:34:53.112197 
containerd[1506]: 2025-07-10 23:34:53.093 [INFO][4884] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8kb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"51f1e356-7683-46d3-a698-c1e84639d1c3", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c", Pod:"coredns-674b8bbfcf-s8kb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf6303a8c71", MAC:"76:34:b0:af:2b:47", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:53.112197 containerd[1506]: 2025-07-10 23:34:53.105 [INFO][4884] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8kb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s8kb7-eth0" Jul 10 23:34:53.142362 containerd[1506]: time="2025-07-10T23:34:53.142304548Z" level=info msg="connecting to shim 55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c" address="unix:///run/containerd/s/c2ad20c0cb48fbde459436ba3a95cae90fe7b2c87b4bec16c1f4b6efbdde50ca" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:53.143552 kubelet[2659]: E0710 23:34:53.143523 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:53.144167 kubelet[2659]: I0710 23:34:53.143858 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:34:53.159500 kubelet[2659]: I0710 23:34:53.159324 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bcf67d85c-7lnr7" podStartSLOduration=29.159305197 podStartE2EDuration="29.159305197s" podCreationTimestamp="2025-07-10 23:34:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:34:53.158950089 +0000 UTC m=+44.368399594" watchObservedRunningTime="2025-07-10 23:34:53.159305197 +0000 UTC m=+44.368754702" Jul 10 23:34:53.215942 systemd[1]: Started cri-containerd-55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c.scope - libcontainer container 55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c. Jul 10 23:34:53.232962 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:34:53.274402 containerd[1506]: time="2025-07-10T23:34:53.274040398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8kb7,Uid:51f1e356-7683-46d3-a698-c1e84639d1c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c\"" Jul 10 23:34:53.275902 kubelet[2659]: E0710 23:34:53.275872 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:53.283415 containerd[1506]: time="2025-07-10T23:34:53.283376710Z" level=info msg="CreateContainer within sandbox \"55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:34:53.317917 containerd[1506]: time="2025-07-10T23:34:53.317586106Z" level=info msg="Container 10ea95fc71f923665cca828c129056f38264cb75eb1fe1c176eaf368a93a364a: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:53.359844 containerd[1506]: time="2025-07-10T23:34:53.359786024Z" level=info msg="CreateContainer within sandbox \"55fa174148abd5a11fe767bec036db65dccebed2fe7370c53a4f16894621618c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"10ea95fc71f923665cca828c129056f38264cb75eb1fe1c176eaf368a93a364a\"" Jul 10 23:34:53.360524 containerd[1506]: time="2025-07-10T23:34:53.360493921Z" level=info msg="StartContainer for \"10ea95fc71f923665cca828c129056f38264cb75eb1fe1c176eaf368a93a364a\"" Jul 10 23:34:53.363546 containerd[1506]: time="2025-07-10T23:34:53.363346951Z" level=info msg="connecting to shim 10ea95fc71f923665cca828c129056f38264cb75eb1fe1c176eaf368a93a364a" address="unix:///run/containerd/s/c2ad20c0cb48fbde459436ba3a95cae90fe7b2c87b4bec16c1f4b6efbdde50ca" protocol=ttrpc version=3 Jul 10 23:34:53.388853 systemd[1]: Started cri-containerd-10ea95fc71f923665cca828c129056f38264cb75eb1fe1c176eaf368a93a364a.scope - libcontainer container 10ea95fc71f923665cca828c129056f38264cb75eb1fe1c176eaf368a93a364a. 
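The recurring kubelet error "Nameserver limits exceeded" above reflects the three-nameserver cap inherited from glibc's resolver (MAXNS=3): kubelet keeps the first three resolv.conf entries and warns about the rest, which is why the applied line shows exactly 1.1.1.1 1.0.0.1 8.8.8.8. A toy reproduction of the truncation; the fourth server is a hypothetical stand-in, since the log only shows the survivors:

    package main

    import "fmt"

    func main() {
        // Hypothetical resolv.conf contents; the log only shows the three kept.
        nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}

        const maxNameservers = 3 // the limit kubelet (and glibc's MAXNS) enforces
        if len(nameservers) > maxNameservers {
            fmt.Printf("omitting %d nameserver(s)\n", len(nameservers)-maxNameservers)
            nameservers = nameservers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", nameservers)
    }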
Jul 10 23:34:53.440439 containerd[1506]: time="2025-07-10T23:34:53.440340192Z" level=info msg="StartContainer for \"10ea95fc71f923665cca828c129056f38264cb75eb1fe1c176eaf368a93a364a\" returns successfully" Jul 10 23:34:53.900022 containerd[1506]: time="2025-07-10T23:34:53.899973652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx5zr,Uid:6e5a2663-9022-49fc-bebc-a43168cdc7dc,Namespace:calico-system,Attempt:0,}" Jul 10 23:34:53.919073 systemd-networkd[1440]: cali13df9d8a008: Gained IPv6LL Jul 10 23:34:53.932717 containerd[1506]: time="2025-07-10T23:34:53.931056556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:53.934912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798305354.mount: Deactivated successfully. Jul 10 23:34:53.940862 containerd[1506]: time="2025-07-10T23:34:53.932640003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 10 23:34:53.940992 containerd[1506]: time="2025-07-10T23:34:53.934906746Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:53.955280 containerd[1506]: time="2025-07-10T23:34:53.953929798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:53.955280 containerd[1506]: time="2025-07-10T23:34:53.954796468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.970081679s" Jul 10 23:34:53.955280 containerd[1506]: time="2025-07-10T23:34:53.954824430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 10 23:34:53.956498 containerd[1506]: time="2025-07-10T23:34:53.956471762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 23:34:53.973996 containerd[1506]: time="2025-07-10T23:34:53.973954011Z" level=info msg="CreateContainer within sandbox \"03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 23:34:53.986404 containerd[1506]: time="2025-07-10T23:34:53.982899731Z" level=info msg="Container 9d9a346d21d88ccad5c9ef9dc1b5e3eed0bba667ff4e10bddc319b984eae372c: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:53.988967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329881803.mount: Deactivated successfully. 
Jul 10 23:34:53.997061 containerd[1506]: time="2025-07-10T23:34:53.996586233Z" level=info msg="CreateContainer within sandbox \"03b355b6d5b523a41d09a3ec394206a9304ecf98c4c6f5de93c850ba6a5d3329\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9d9a346d21d88ccad5c9ef9dc1b5e3eed0bba667ff4e10bddc319b984eae372c\"" Jul 10 23:34:53.997918 containerd[1506]: time="2025-07-10T23:34:53.997867177Z" level=info msg="StartContainer for \"9d9a346d21d88ccad5c9ef9dc1b5e3eed0bba667ff4e10bddc319b984eae372c\"" Jul 10 23:34:53.999235 containerd[1506]: time="2025-07-10T23:34:53.999202044Z" level=info msg="connecting to shim 9d9a346d21d88ccad5c9ef9dc1b5e3eed0bba667ff4e10bddc319b984eae372c" address="unix:///run/containerd/s/20f874f2b75cfef8c09f838756afefa9b071ad690c925e385ba10c762c8aae88" protocol=ttrpc version=3 Jul 10 23:34:54.027836 systemd[1]: Started cri-containerd-9d9a346d21d88ccad5c9ef9dc1b5e3eed0bba667ff4e10bddc319b984eae372c.scope - libcontainer container 9d9a346d21d88ccad5c9ef9dc1b5e3eed0bba667ff4e10bddc319b984eae372c. Jul 10 23:34:54.081938 systemd-networkd[1440]: cali2ba14788b2c: Link UP Jul 10 23:34:54.083028 systemd-networkd[1440]: cali2ba14788b2c: Gained carrier Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:53.998 [INFO][5006] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rx5zr-eth0 csi-node-driver- calico-system 6e5a2663-9022-49fc-bebc-a43168cdc7dc 720 0 2025-07-10 23:34:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rx5zr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2ba14788b2c [] [] }} ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Namespace="calico-system" Pod="csi-node-driver-rx5zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx5zr-" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:53.998 [INFO][5006] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Namespace="calico-system" Pod="csi-node-driver-rx5zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx5zr-eth0" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.030 [INFO][5028] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" HandleID="k8s-pod-network.489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Workload="localhost-k8s-csi--node--driver--rx5zr-eth0" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.030 [INFO][5028] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" HandleID="k8s-pod-network.489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Workload="localhost-k8s-csi--node--driver--rx5zr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rx5zr", "timestamp":"2025-07-10 23:34:54.030321981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.030 [INFO][5028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.030 [INFO][5028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.030 [INFO][5028] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.043 [INFO][5028] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" host="localhost" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.049 [INFO][5028] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.055 [INFO][5028] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.058 [INFO][5028] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.060 [INFO][5028] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.060 [INFO][5028] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" host="localhost" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.062 [INFO][5028] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63 Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.066 [INFO][5028] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" host="localhost" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.074 [INFO][5028] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" host="localhost" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.074 [INFO][5028] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" host="localhost" Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.074 [INFO][5028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 23:34:54.104055 containerd[1506]: 2025-07-10 23:34:54.074 [INFO][5028] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" HandleID="k8s-pod-network.489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Workload="localhost-k8s-csi--node--driver--rx5zr-eth0" Jul 10 23:34:54.104569 containerd[1506]: 2025-07-10 23:34:54.079 [INFO][5006] cni-plugin/k8s.go 418: Populated endpoint ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Namespace="calico-system" Pod="csi-node-driver-rx5zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx5zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rx5zr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e5a2663-9022-49fc-bebc-a43168cdc7dc", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rx5zr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ba14788b2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:54.104569 containerd[1506]: 2025-07-10 23:34:54.079 [INFO][5006] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Namespace="calico-system" Pod="csi-node-driver-rx5zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx5zr-eth0" Jul 10 23:34:54.104569 containerd[1506]: 2025-07-10 23:34:54.079 [INFO][5006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ba14788b2c ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Namespace="calico-system" Pod="csi-node-driver-rx5zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx5zr-eth0" Jul 10 23:34:54.104569 containerd[1506]: 2025-07-10 23:34:54.083 [INFO][5006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Namespace="calico-system" Pod="csi-node-driver-rx5zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx5zr-eth0" Jul 10 23:34:54.104569 containerd[1506]: 2025-07-10 23:34:54.083 [INFO][5006] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Namespace="calico-system" Pod="csi-node-driver-rx5zr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--rx5zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rx5zr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e5a2663-9022-49fc-bebc-a43168cdc7dc", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 23, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63", Pod:"csi-node-driver-rx5zr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ba14788b2c", MAC:"8a:0d:95:5e:fe:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 23:34:54.104569 containerd[1506]: 2025-07-10 23:34:54.097 [INFO][5006] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" Namespace="calico-system" Pod="csi-node-driver-rx5zr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx5zr-eth0" Jul 10 23:34:54.108081 containerd[1506]: time="2025-07-10T23:34:54.108042830Z" level=info msg="StartContainer for \"9d9a346d21d88ccad5c9ef9dc1b5e3eed0bba667ff4e10bddc319b984eae372c\" returns successfully" Jul 10 23:34:54.130621 containerd[1506]: time="2025-07-10T23:34:54.130561126Z" level=info msg="connecting to shim 489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63" address="unix:///run/containerd/s/3abe0f446aefdfce01361b505777556fe8a627b1d5a80db8835160cc4a5fc1a7" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:34:54.157401 kubelet[2659]: E0710 23:34:54.156865 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:54.159166 kubelet[2659]: I0710 23:34:54.157955 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:34:54.164801 kubelet[2659]: E0710 23:34:54.164745 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:54.169495 systemd[1]: Started cri-containerd-489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63.scope - libcontainer container 489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63. 
Jul 10 23:34:54.172922 kubelet[2659]: I0710 23:34:54.172364 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5444b768d7-9ffjl" podStartSLOduration=20.544458378 podStartE2EDuration="24.172345942s" podCreationTimestamp="2025-07-10 23:34:30 +0000 UTC" firstStartedPulling="2025-07-10 23:34:50.328435867 +0000 UTC m=+41.537885372" lastFinishedPulling="2025-07-10 23:34:53.956323431 +0000 UTC m=+45.165772936" observedRunningTime="2025-07-10 23:34:54.172317099 +0000 UTC m=+45.381766604" watchObservedRunningTime="2025-07-10 23:34:54.172345942 +0000 UTC m=+45.381795487" Jul 10 23:34:54.188989 kubelet[2659]: I0710 23:34:54.188013 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s8kb7" podStartSLOduration=39.187996216 podStartE2EDuration="39.187996216s" podCreationTimestamp="2025-07-10 23:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:34:54.187492576 +0000 UTC m=+45.396942041" watchObservedRunningTime="2025-07-10 23:34:54.187996216 +0000 UTC m=+45.397445721" Jul 10 23:34:54.206256 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:34:54.265517 containerd[1506]: time="2025-07-10T23:34:54.265415082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx5zr,Uid:6e5a2663-9022-49fc-bebc-a43168cdc7dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63\"" Jul 10 23:34:54.280101 containerd[1506]: time="2025-07-10T23:34:54.280053596Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:54.280594 containerd[1506]: time="2025-07-10T23:34:54.280564077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 23:34:54.282861 containerd[1506]: time="2025-07-10T23:34:54.282751969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 326.247284ms" Jul 10 23:34:54.282861 containerd[1506]: time="2025-07-10T23:34:54.282825335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 23:34:54.285524 containerd[1506]: time="2025-07-10T23:34:54.284706483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 23:34:54.290047 containerd[1506]: time="2025-07-10T23:34:54.289967418Z" level=info msg="CreateContainer within sandbox \"dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 23:34:54.298320 containerd[1506]: time="2025-07-10T23:34:54.298280314Z" level=info msg="Container 905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:54.308938 containerd[1506]: time="2025-07-10T23:34:54.308888990Z" level=info msg="CreateContainer within sandbox 
\"dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\"" Jul 10 23:34:54.309933 containerd[1506]: time="2025-07-10T23:34:54.309887869Z" level=info msg="StartContainer for \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\"" Jul 10 23:34:54.311416 containerd[1506]: time="2025-07-10T23:34:54.310974435Z" level=info msg="connecting to shim 905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9" address="unix:///run/containerd/s/58beba3f7ce952de7b352160c7403e0673532db1d8a71b82bac983be55b56399" protocol=ttrpc version=3 Jul 10 23:34:54.343834 systemd[1]: Started cri-containerd-905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9.scope - libcontainer container 905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9. Jul 10 23:34:54.385446 containerd[1506]: time="2025-07-10T23:34:54.385410385Z" level=info msg="StartContainer for \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" returns successfully" Jul 10 23:34:54.878827 systemd-networkd[1440]: califf6303a8c71: Gained IPv6LL Jul 10 23:34:55.163181 kubelet[2659]: I0710 23:34:55.162948 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:34:55.168770 kubelet[2659]: E0710 23:34:55.164247 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:55.168770 kubelet[2659]: E0710 23:34:55.164319 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:55.967812 systemd-networkd[1440]: cali2ba14788b2c: Gained IPv6LL Jul 10 23:34:56.069729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007574412.mount: Deactivated successfully. 
Jul 10 23:34:56.165964 kubelet[2659]: I0710 23:34:56.165926 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:34:56.166853 kubelet[2659]: E0710 23:34:56.166248 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:34:56.501305 containerd[1506]: time="2025-07-10T23:34:56.501257562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:56.501811 containerd[1506]: time="2025-07-10T23:34:56.501736558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 10 23:34:56.503242 containerd[1506]: time="2025-07-10T23:34:56.503193869Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:56.505297 containerd[1506]: time="2025-07-10T23:34:56.505258506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:56.507738 containerd[1506]: time="2025-07-10T23:34:56.506181496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.221279117s" Jul 10 23:34:56.507738 containerd[1506]: time="2025-07-10T23:34:56.506219778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 10 23:34:56.509768 containerd[1506]: time="2025-07-10T23:34:56.509725364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 23:34:56.513837 containerd[1506]: time="2025-07-10T23:34:56.513799313Z" level=info msg="CreateContainer within sandbox \"0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 23:34:56.518773 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:43216.service - OpenSSH per-connection server daemon (10.0.0.1:43216). 
Jul 10 23:34:56.527758 containerd[1506]: time="2025-07-10T23:34:56.527709648Z" level=info msg="Container 0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:56.544064 containerd[1506]: time="2025-07-10T23:34:56.544016804Z" level=info msg="CreateContainer within sandbox \"0302ad7d7b2ae7981681e1f6ad484c8b3ae8ae421257ca5f957397980520b38f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7\"" Jul 10 23:34:56.544900 containerd[1506]: time="2025-07-10T23:34:56.544875870Z" level=info msg="StartContainer for \"0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7\"" Jul 10 23:34:56.546386 containerd[1506]: time="2025-07-10T23:34:56.546342061Z" level=info msg="connecting to shim 0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7" address="unix:///run/containerd/s/75b730e9122f252dd397f12af3d74147a8b85a41297eff0113bb9d9e4cefb38d" protocol=ttrpc version=3 Jul 10 23:34:56.567839 systemd[1]: Started cri-containerd-0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7.scope - libcontainer container 0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7. Jul 10 23:34:56.594777 sshd[5182]: Accepted publickey for core from 10.0.0.1 port 43216 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:34:56.596990 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:34:56.602247 systemd-logind[1478]: New session 9 of user core. Jul 10 23:34:56.607987 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 23:34:56.620031 containerd[1506]: time="2025-07-10T23:34:56.619990085Z" level=info msg="StartContainer for \"0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7\" returns successfully" Jul 10 23:34:56.818796 sshd[5210]: Connection closed by 10.0.0.1 port 43216 Jul 10 23:34:56.819330 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Jul 10 23:34:56.823566 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:43216.service: Deactivated successfully. Jul 10 23:34:56.825434 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 23:34:56.826309 systemd-logind[1478]: Session 9 logged out. Waiting for processes to exit. Jul 10 23:34:56.828048 systemd-logind[1478]: Removed session 9. 
Jul 10 23:34:57.183146 kubelet[2659]: I0710 23:34:57.183009 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-7q7p2" podStartSLOduration=22.24015027 podStartE2EDuration="27.182981765s" podCreationTimestamp="2025-07-10 23:34:30 +0000 UTC" firstStartedPulling="2025-07-10 23:34:51.566366069 +0000 UTC m=+42.775815574" lastFinishedPulling="2025-07-10 23:34:56.509197564 +0000 UTC m=+47.718647069" observedRunningTime="2025-07-10 23:34:57.182719705 +0000 UTC m=+48.392169210" watchObservedRunningTime="2025-07-10 23:34:57.182981765 +0000 UTC m=+48.392431230" Jul 10 23:34:57.184329 kubelet[2659]: I0710 23:34:57.183838 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bcf67d85c-cg7hx" podStartSLOduration=30.37738431 podStartE2EDuration="33.183824147s" podCreationTimestamp="2025-07-10 23:34:24 +0000 UTC" firstStartedPulling="2025-07-10 23:34:51.477914579 +0000 UTC m=+42.687364084" lastFinishedPulling="2025-07-10 23:34:54.284354416 +0000 UTC m=+45.493803921" observedRunningTime="2025-07-10 23:34:55.183708737 +0000 UTC m=+46.393158362" watchObservedRunningTime="2025-07-10 23:34:57.183824147 +0000 UTC m=+48.393273612" Jul 10 23:34:57.624987 kubelet[2659]: I0710 23:34:57.624677 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:34:57.844797 containerd[1506]: time="2025-07-10T23:34:57.844751270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:57.846726 containerd[1506]: time="2025-07-10T23:34:57.846677133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 10 23:34:57.848136 containerd[1506]: time="2025-07-10T23:34:57.848109840Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:57.850872 containerd[1506]: time="2025-07-10T23:34:57.850825842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:57.851418 containerd[1506]: time="2025-07-10T23:34:57.851393884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.341615236s" Jul 10 23:34:57.851455 containerd[1506]: time="2025-07-10T23:34:57.851425407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 10 23:34:57.857199 containerd[1506]: time="2025-07-10T23:34:57.857151273Z" level=info msg="CreateContainer within sandbox \"489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 23:34:57.871471 containerd[1506]: time="2025-07-10T23:34:57.870832331Z" level=info msg="Container de5415f6420a4e6798a2046e7d8121506286e8bc8decdcaf32e82b12789336a0: CDI devices from CRI Config.CDIDevices: []"
Jul 10 23:34:57.884894 containerd[1506]: time="2025-07-10T23:34:57.884764408Z" level=info msg="CreateContainer within sandbox \"489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"de5415f6420a4e6798a2046e7d8121506286e8bc8decdcaf32e82b12789336a0\"" Jul 10 23:34:57.885552 containerd[1506]: time="2025-07-10T23:34:57.885520545Z" level=info msg="StartContainer for \"de5415f6420a4e6798a2046e7d8121506286e8bc8decdcaf32e82b12789336a0\"" Jul 10 23:34:57.887231 containerd[1506]: time="2025-07-10T23:34:57.887186789Z" level=info msg="connecting to shim de5415f6420a4e6798a2046e7d8121506286e8bc8decdcaf32e82b12789336a0" address="unix:///run/containerd/s/3abe0f446aefdfce01361b505777556fe8a627b1d5a80db8835160cc4a5fc1a7" protocol=ttrpc version=3 Jul 10 23:34:57.912860 systemd[1]: Started cri-containerd-de5415f6420a4e6798a2046e7d8121506286e8bc8decdcaf32e82b12789336a0.scope - libcontainer container de5415f6420a4e6798a2046e7d8121506286e8bc8decdcaf32e82b12789336a0. Jul 10 23:34:57.952411 containerd[1506]: time="2025-07-10T23:34:57.952334879Z" level=info msg="StartContainer for \"de5415f6420a4e6798a2046e7d8121506286e8bc8decdcaf32e82b12789336a0\" returns successfully" Jul 10 23:34:57.954225 containerd[1506]: time="2025-07-10T23:34:57.954063767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 23:34:58.176288 kubelet[2659]: I0710 23:34:58.176033 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:34:59.081449 containerd[1506]: time="2025-07-10T23:34:59.081389878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:59.082648 containerd[1506]: time="2025-07-10T23:34:59.082427873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 10 23:34:59.084648 containerd[1506]: time="2025-07-10T23:34:59.084564306Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:59.088542 containerd[1506]: time="2025-07-10T23:34:59.088490789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:34:59.090083 containerd[1506]: time="2025-07-10T23:34:59.090033220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.135900127s" Jul 10 23:34:59.090285 containerd[1506]: time="2025-07-10T23:34:59.090184791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 10 23:34:59.097518 containerd[1506]: time="2025-07-10T23:34:59.097470155Z" level=info msg="CreateContainer within sandbox \"489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 10 23:34:59.181397 containerd[1506]: time="2025-07-10T23:34:59.181351269Z" level=info msg="Container 5ede1bd28bc62f8e81169aed3572271ebd057a78baabb116d8d5e8c5e4d2dda5: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:34:59.185988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3387052940.mount: Deactivated successfully. Jul 10 23:34:59.211628 containerd[1506]: time="2025-07-10T23:34:59.211547721Z" level=info msg="CreateContainer within sandbox \"489b7276c47cc47ee6b441e73a9e330c34d92483c9dc3334fbddfc24a9c98b63\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5ede1bd28bc62f8e81169aed3572271ebd057a78baabb116d8d5e8c5e4d2dda5\"" Jul 10 23:34:59.212380 containerd[1506]: time="2025-07-10T23:34:59.212117642Z" level=info msg="StartContainer for \"5ede1bd28bc62f8e81169aed3572271ebd057a78baabb116d8d5e8c5e4d2dda5\"" Jul 10 23:34:59.213922 containerd[1506]: time="2025-07-10T23:34:59.213869048Z" level=info msg="connecting to shim 5ede1bd28bc62f8e81169aed3572271ebd057a78baabb116d8d5e8c5e4d2dda5" address="unix:///run/containerd/s/3abe0f446aefdfce01361b505777556fe8a627b1d5a80db8835160cc4a5fc1a7" protocol=ttrpc version=3 Jul 10 23:34:59.241884 systemd[1]: Started cri-containerd-5ede1bd28bc62f8e81169aed3572271ebd057a78baabb116d8d5e8c5e4d2dda5.scope - libcontainer container 5ede1bd28bc62f8e81169aed3572271ebd057a78baabb116d8d5e8c5e4d2dda5. Jul 10 23:34:59.295266 containerd[1506]: time="2025-07-10T23:34:59.295145775Z" level=info msg="StartContainer for \"5ede1bd28bc62f8e81169aed3572271ebd057a78baabb116d8d5e8c5e4d2dda5\" returns successfully" Jul 10 23:34:59.991192 kubelet[2659]: I0710 23:34:59.991137 2659 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 23:34:59.994089 kubelet[2659]: I0710 23:34:59.994037 2659 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 23:35:00.198703 kubelet[2659]: I0710 23:35:00.197482 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rx5zr" podStartSLOduration=25.372284454 podStartE2EDuration="30.197334893s" podCreationTimestamp="2025-07-10 23:34:30 +0000 UTC" firstStartedPulling="2025-07-10 23:34:54.266825473 +0000 UTC m=+45.476274978" lastFinishedPulling="2025-07-10 23:34:59.091875912 +0000 UTC m=+50.301325417" observedRunningTime="2025-07-10 23:35:00.195705058 +0000 UTC m=+51.405154603" watchObservedRunningTime="2025-07-10 23:35:00.197334893 +0000 UTC m=+51.406784398" Jul 10 23:35:01.834080 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:43234.service - OpenSSH per-connection server daemon (10.0.0.1:43234). Jul 10 23:35:01.907084 sshd[5315]: Accepted publickey for core from 10.0.0.1 port 43234 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:01.908956 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:01.913719 systemd-logind[1478]: New session 10 of user core. Jul 10 23:35:01.923859 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 23:35:02.073498 sshd[5317]: Connection closed by 10.0.0.1 port 43234 Jul 10 23:35:02.073847 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:02.081822 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:43234.service: Deactivated successfully.
Jul 10 23:35:02.083663 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 23:35:02.084405 systemd-logind[1478]: Session 10 logged out. Waiting for processes to exit. Jul 10 23:35:02.087342 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:43240.service - OpenSSH per-connection server daemon (10.0.0.1:43240). Jul 10 23:35:02.088130 systemd-logind[1478]: Removed session 10. Jul 10 23:35:02.142311 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 43240 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:02.143755 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:02.147988 systemd-logind[1478]: New session 11 of user core. Jul 10 23:35:02.160817 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 23:35:02.364334 sshd[5333]: Connection closed by 10.0.0.1 port 43240 Jul 10 23:35:02.364587 sshd-session[5331]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:02.379513 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:43240.service: Deactivated successfully. Jul 10 23:35:02.385528 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 23:35:02.388831 systemd-logind[1478]: Session 11 logged out. Waiting for processes to exit. Jul 10 23:35:02.390849 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:43252.service - OpenSSH per-connection server daemon (10.0.0.1:43252). Jul 10 23:35:02.392601 systemd-logind[1478]: Removed session 11. Jul 10 23:35:02.449986 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 43252 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:02.451266 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:02.455199 systemd-logind[1478]: New session 12 of user core. Jul 10 23:35:02.462821 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 23:35:02.607511 sshd[5347]: Connection closed by 10.0.0.1 port 43252 Jul 10 23:35:02.607933 sshd-session[5345]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:02.612755 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:43252.service: Deactivated successfully. Jul 10 23:35:02.615617 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 23:35:02.617093 systemd-logind[1478]: Session 12 logged out. Waiting for processes to exit. Jul 10 23:35:02.618500 systemd-logind[1478]: Removed session 12. 
Jul 10 23:35:04.472888 kubelet[2659]: I0710 23:35:04.472844 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:35:04.603789 containerd[1506]: time="2025-07-10T23:35:04.603474816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7\" id:\"7f5d2c13b191708602d45970e9ff741075628bb487eb5b53882a052ba07aba85\" pid:5384 exited_at:{seconds:1752190504 nanos:603135953}" Jul 10 23:35:04.685646 containerd[1506]: time="2025-07-10T23:35:04.685605312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7\" id:\"78685eb0fb82ac5cf4cdbd432b19e4819e88b28e3a02000061e82d1423f5cef8\" pid:5409 exited_at:{seconds:1752190504 nanos:685323933}" Jul 10 23:35:05.900147 kubelet[2659]: I0710 23:35:05.900096 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:35:05.951938 containerd[1506]: time="2025-07-10T23:35:05.951827583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d9a346d21d88ccad5c9ef9dc1b5e3eed0bba667ff4e10bddc319b984eae372c\" id:\"17706204470106852b2da5cc82d9b80a523b7e5fccbbcd06fc91c132f0d4cdb5\" pid:5434 exited_at:{seconds:1752190505 nanos:946384903}" Jul 10 23:35:06.000108 containerd[1506]: time="2025-07-10T23:35:06.000063891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d9a346d21d88ccad5c9ef9dc1b5e3eed0bba667ff4e10bddc319b984eae372c\" id:\"dbd6ebdebc1323d3149506d1ff1212c3b7428f39208456de08aabd29a09d2ae2\" pid:5456 exited_at:{seconds:1752190505 nanos:999792633}" Jul 10 23:35:07.538923 kubelet[2659]: I0710 23:35:07.538858 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:35:07.628805 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:53366.service - OpenSSH per-connection server daemon (10.0.0.1:53366). Jul 10 23:35:07.721032 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 53366 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:07.722449 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:07.727117 systemd-logind[1478]: New session 13 of user core. Jul 10 23:35:07.738888 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 23:35:07.945943 sshd[5471]: Connection closed by 10.0.0.1 port 53366 Jul 10 23:35:07.946932 sshd-session[5469]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:07.951589 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:53366.service: Deactivated successfully. Jul 10 23:35:07.953826 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 23:35:07.954709 systemd-logind[1478]: Session 13 logged out. Waiting for processes to exit. Jul 10 23:35:07.956415 systemd-logind[1478]: Removed session 13. Jul 10 23:35:12.960332 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:60820.service - OpenSSH per-connection server daemon (10.0.0.1:60820). Jul 10 23:35:13.016125 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 60820 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:13.017771 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:13.025475 systemd-logind[1478]: New session 14 of user core. Jul 10 23:35:13.032847 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 10 23:35:13.204603 sshd[5492]: Connection closed by 10.0.0.1 port 60820 Jul 10 23:35:13.205100 sshd-session[5490]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:13.212838 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:60820.service: Deactivated successfully. Jul 10 23:35:13.216377 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 23:35:13.218300 systemd-logind[1478]: Session 14 logged out. Waiting for processes to exit. Jul 10 23:35:13.219318 systemd-logind[1478]: Removed session 14. Jul 10 23:35:17.718679 containerd[1506]: time="2025-07-10T23:35:17.718598218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e8f083c1f230f4f3781dfb899ffa75fe54765ea4e7a46f04a66dae12cd244b6\" id:\"8939c217c1b9ac5a227be02a37421d6104f4c44ef2879a12841d4179709277fd\" pid:5520 exited_at:{seconds:1752190517 nanos:718226641}" Jul 10 23:35:18.222390 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:60896.service - OpenSSH per-connection server daemon (10.0.0.1:60896). Jul 10 23:35:18.343606 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 60896 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:18.345581 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:18.355108 systemd-logind[1478]: New session 15 of user core. Jul 10 23:35:18.364972 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 23:35:18.594343 sshd[5535]: Connection closed by 10.0.0.1 port 60896 Jul 10 23:35:18.594794 sshd-session[5533]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:18.606748 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:60896.service: Deactivated successfully. Jul 10 23:35:18.609895 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 23:35:18.612868 systemd-logind[1478]: Session 15 logged out. Waiting for processes to exit. Jul 10 23:35:18.615511 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:60904.service - OpenSSH per-connection server daemon (10.0.0.1:60904). Jul 10 23:35:18.617392 systemd-logind[1478]: Removed session 15. Jul 10 23:35:18.677238 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 60904 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:18.678796 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:18.683720 systemd-logind[1478]: New session 16 of user core. Jul 10 23:35:18.695876 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 23:35:18.942176 sshd[5551]: Connection closed by 10.0.0.1 port 60904 Jul 10 23:35:18.942443 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:18.955542 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:60904.service: Deactivated successfully. Jul 10 23:35:18.957853 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 23:35:18.959271 systemd-logind[1478]: Session 16 logged out. Waiting for processes to exit. Jul 10 23:35:18.962293 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:60914.service - OpenSSH per-connection server daemon (10.0.0.1:60914). Jul 10 23:35:18.963829 systemd-logind[1478]: Removed session 16. Jul 10 23:35:19.045571 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 60914 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:19.047269 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:19.052724 systemd-logind[1478]: New session 17 of user core. 
Jul 10 23:35:19.061931 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 23:35:20.006752 sshd[5565]: Connection closed by 10.0.0.1 port 60914 Jul 10 23:35:20.007411 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:20.017711 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:60914.service: Deactivated successfully. Jul 10 23:35:20.020306 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 23:35:20.024997 systemd-logind[1478]: Session 17 logged out. Waiting for processes to exit. Jul 10 23:35:20.028139 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:60924.service - OpenSSH per-connection server daemon (10.0.0.1:60924). Jul 10 23:35:20.029480 systemd-logind[1478]: Removed session 17. Jul 10 23:35:20.092029 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 60924 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:20.093559 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:20.098191 systemd-logind[1478]: New session 18 of user core. Jul 10 23:35:20.105832 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 23:35:20.448189 sshd[5586]: Connection closed by 10.0.0.1 port 60924 Jul 10 23:35:20.447835 sshd-session[5584]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:20.458141 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:60924.service: Deactivated successfully. Jul 10 23:35:20.460225 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 23:35:20.465500 systemd-logind[1478]: Session 18 logged out. Waiting for processes to exit. Jul 10 23:35:20.468763 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:60938.service - OpenSSH per-connection server daemon (10.0.0.1:60938). Jul 10 23:35:20.471087 systemd-logind[1478]: Removed session 18. Jul 10 23:35:20.536107 sshd[5599]: Accepted publickey for core from 10.0.0.1 port 60938 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:20.537595 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:20.541924 systemd-logind[1478]: New session 19 of user core. Jul 10 23:35:20.552268 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 23:35:20.739031 sshd[5601]: Connection closed by 10.0.0.1 port 60938 Jul 10 23:35:20.739572 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:20.744518 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:60938.service: Deactivated successfully. Jul 10 23:35:20.748205 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 23:35:20.749757 systemd-logind[1478]: Session 19 logged out. Waiting for processes to exit. Jul 10 23:35:20.754325 systemd-logind[1478]: Removed session 19. Jul 10 23:35:25.753051 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:44730.service - OpenSSH per-connection server daemon (10.0.0.1:44730). Jul 10 23:35:25.814756 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 44730 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:25.816089 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:25.820062 systemd-logind[1478]: New session 20 of user core. Jul 10 23:35:25.837554 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 10 23:35:26.006873 sshd[5626]: Connection closed by 10.0.0.1 port 44730 Jul 10 23:35:26.007594 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:26.011555 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:44730.service: Deactivated successfully. Jul 10 23:35:26.014845 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 23:35:26.016965 systemd-logind[1478]: Session 20 logged out. Waiting for processes to exit. Jul 10 23:35:26.018810 systemd-logind[1478]: Removed session 20. Jul 10 23:35:28.903161 kubelet[2659]: E0710 23:35:28.903122 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:35:31.024055 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:44738.service - OpenSSH per-connection server daemon (10.0.0.1:44738). Jul 10 23:35:31.100956 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 44738 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:31.106416 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:31.114060 systemd-logind[1478]: New session 21 of user core. Jul 10 23:35:31.123888 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 23:35:31.316319 sshd[5641]: Connection closed by 10.0.0.1 port 44738 Jul 10 23:35:31.316725 sshd-session[5639]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:31.323830 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:44738.service: Deactivated successfully. Jul 10 23:35:31.326250 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 23:35:31.327865 systemd-logind[1478]: Session 21 logged out. Waiting for processes to exit. Jul 10 23:35:31.329738 systemd-logind[1478]: Removed session 21. Jul 10 23:35:32.739349 kubelet[2659]: I0710 23:35:32.738937 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:35:32.785113 containerd[1506]: time="2025-07-10T23:35:32.784930391Z" level=info msg="StopContainer for \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" with timeout 30 (s)" Jul 10 23:35:32.788008 containerd[1506]: time="2025-07-10T23:35:32.787865124Z" level=info msg="Stop container \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" with signal terminated" Jul 10 23:35:32.815741 systemd[1]: cri-containerd-905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9.scope: Deactivated successfully. Jul 10 23:35:32.817836 systemd[1]: cri-containerd-905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9.scope: Consumed 1.550s CPU time, 43.9M memory peak, 1M read from disk. 
Jul 10 23:35:32.823219 containerd[1506]: time="2025-07-10T23:35:32.823064601Z" level=info msg="received exit event container_id:\"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" id:\"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" pid:5145 exit_status:1 exited_at:{seconds:1752190532 nanos:822710330}" Jul 10 23:35:32.823219 containerd[1506]: time="2025-07-10T23:35:32.823162159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" id:\"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" pid:5145 exit_status:1 exited_at:{seconds:1752190532 nanos:822710330}" Jul 10 23:35:32.855248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9-rootfs.mount: Deactivated successfully. Jul 10 23:35:32.875295 containerd[1506]: time="2025-07-10T23:35:32.875129294Z" level=info msg="StopContainer for \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" returns successfully" Jul 10 23:35:32.878025 containerd[1506]: time="2025-07-10T23:35:32.877970149Z" level=info msg="StopPodSandbox for \"dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b\"" Jul 10 23:35:32.883904 containerd[1506]: time="2025-07-10T23:35:32.883856135Z" level=info msg="Container to stop \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:35:32.900681 kubelet[2659]: E0710 23:35:32.900647 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:35:32.905109 systemd[1]: cri-containerd-dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b.scope: Deactivated successfully. Jul 10 23:35:32.910320 containerd[1506]: time="2025-07-10T23:35:32.910276092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b\" id:\"dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b\" pid:4623 exit_status:137 exited_at:{seconds:1752190532 nanos:909227036}" Jul 10 23:35:32.950044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b-rootfs.mount: Deactivated successfully. 
Jul 10 23:35:32.966724 containerd[1506]: time="2025-07-10T23:35:32.957956885Z" level=error msg="ttrpc: received message on inactive stream" stream=57 Jul 10 23:35:32.966913 containerd[1506]: time="2025-07-10T23:35:32.966893041Z" level=info msg="shim disconnected" id=dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b namespace=k8s.io Jul 10 23:35:32.966975 containerd[1506]: time="2025-07-10T23:35:32.966962879Z" level=warning msg="cleaning up after shim disconnected" id=dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b namespace=k8s.io Jul 10 23:35:32.967034 containerd[1506]: time="2025-07-10T23:35:32.967022318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:35:32.995772 containerd[1506]: time="2025-07-10T23:35:32.994710646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7\" id:\"5034f9888f8770457801b556e0982ff33a6cb0ac7c01c080a448f78bb03961ed\" pid:5690 exited_at:{seconds:1752190532 nanos:973386133}" Jul 10 23:35:32.997228 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b-shm.mount: Deactivated successfully. Jul 10 23:35:33.002335 containerd[1506]: time="2025-07-10T23:35:33.000833066Z" level=info msg="received exit event sandbox_id:\"dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b\" exit_status:137 exited_at:{seconds:1752190532 nanos:909227036}" Jul 10 23:35:33.082424 systemd-networkd[1440]: cali4c62c565305: Link DOWN Jul 10 23:35:33.082431 systemd-networkd[1440]: cali4c62c565305: Lost carrier Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.080 [INFO][5749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.080 [INFO][5749] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" iface="eth0" netns="/var/run/netns/cni-855a98fd-59c7-f40c-c0d8-17ad1ec1203b" Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.080 [INFO][5749] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" iface="eth0" netns="/var/run/netns/cni-855a98fd-59c7-f40c-c0d8-17ad1ec1203b" Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.090 [INFO][5749] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" after=10.131949ms iface="eth0" netns="/var/run/netns/cni-855a98fd-59c7-f40c-c0d8-17ad1ec1203b"
Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.090 [INFO][5749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.090 [INFO][5749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.121 [INFO][5763] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" HandleID="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Workload="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.121 [INFO][5763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.121 [INFO][5763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.182 [INFO][5763] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" HandleID="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Workload="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.182 [INFO][5763] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" HandleID="k8s-pod-network.dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Workload="localhost-k8s-calico--apiserver--7bcf67d85c--cg7hx-eth0" Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.184 [INFO][5763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 23:35:33.192294 containerd[1506]: 2025-07-10 23:35:33.188 [INFO][5749] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b" Jul 10 23:35:33.194008 containerd[1506]: time="2025-07-10T23:35:33.193972078Z" level=info msg="TearDown network for sandbox \"dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b\" successfully" Jul 10 23:35:33.194008 containerd[1506]: time="2025-07-10T23:35:33.194008237Z" level=info msg="StopPodSandbox for \"dca0bc31a5a5ca0e86def6cabe1ddeced5dfb9b34847361783162f4f42162a5b\" returns successfully" Jul 10 23:35:33.194826 systemd[1]: run-netns-cni\x2d855a98fd\x2d59c7\x2df40c\x2dc0d8\x2d17ad1ec1203b.mount: Deactivated successfully.
Jul 10 23:35:33.269555 kubelet[2659]: I0710 23:35:33.269407 2659 scope.go:117] "RemoveContainer" containerID="905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9" Jul 10 23:35:33.272184 containerd[1506]: time="2025-07-10T23:35:33.272148207Z" level=info msg="RemoveContainer for \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\"" Jul 10 23:35:33.277878 containerd[1506]: time="2025-07-10T23:35:33.277831889Z" level=info msg="RemoveContainer for \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" returns successfully" Jul 10 23:35:33.278196 kubelet[2659]: I0710 23:35:33.278130 2659 scope.go:117] "RemoveContainer" containerID="905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9" Jul 10 23:35:33.278404 containerd[1506]: time="2025-07-10T23:35:33.278359078Z" level=error msg="ContainerStatus for \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\": not found" Jul 10 23:35:33.280392 kubelet[2659]: E0710 23:35:33.280344 2659 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\": not found" containerID="905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9" Jul 10 23:35:33.280472 kubelet[2659]: I0710 23:35:33.280397 2659 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9"} err="failed to get container status \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"905d5dd91c6cba07b5188fcb300ec4ff5007a6bdc14f306c518ecfee67d0b6f9\": not found" Jul 10 23:35:33.365898 kubelet[2659]: I0710 23:35:33.365848 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7cjj\" (UniqueName: \"kubernetes.io/projected/1aa67573-c049-4a50-9fad-b9f78cda01f2-kube-api-access-q7cjj\") pod \"1aa67573-c049-4a50-9fad-b9f78cda01f2\" (UID: \"1aa67573-c049-4a50-9fad-b9f78cda01f2\") " Jul 10 23:35:33.366039 kubelet[2659]: I0710 23:35:33.365931 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1aa67573-c049-4a50-9fad-b9f78cda01f2-calico-apiserver-certs\") pod \"1aa67573-c049-4a50-9fad-b9f78cda01f2\" (UID: \"1aa67573-c049-4a50-9fad-b9f78cda01f2\") " Jul 10 23:35:33.370615 systemd[1]: var-lib-kubelet-pods-1aa67573\x2dc049\x2d4a50\x2d9fad\x2db9f78cda01f2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq7cjj.mount: Deactivated successfully. Jul 10 23:35:33.370740 systemd[1]: var-lib-kubelet-pods-1aa67573\x2dc049\x2d4a50\x2d9fad\x2db9f78cda01f2-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 10 23:35:33.371579 kubelet[2659]: I0710 23:35:33.370874 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aa67573-c049-4a50-9fad-b9f78cda01f2-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "1aa67573-c049-4a50-9fad-b9f78cda01f2" (UID: "1aa67573-c049-4a50-9fad-b9f78cda01f2"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 10 23:35:33.371579 kubelet[2659]: I0710 23:35:33.371290 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa67573-c049-4a50-9fad-b9f78cda01f2-kube-api-access-q7cjj" (OuterVolumeSpecName: "kube-api-access-q7cjj") pod "1aa67573-c049-4a50-9fad-b9f78cda01f2" (UID: "1aa67573-c049-4a50-9fad-b9f78cda01f2"). InnerVolumeSpecName "kube-api-access-q7cjj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:35:33.466590 kubelet[2659]: I0710 23:35:33.466544 2659 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1aa67573-c049-4a50-9fad-b9f78cda01f2-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 10 23:35:33.466590 kubelet[2659]: I0710 23:35:33.466580 2659 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q7cjj\" (UniqueName: \"kubernetes.io/projected/1aa67573-c049-4a50-9fad-b9f78cda01f2-kube-api-access-q7cjj\") on node \"localhost\" DevicePath \"\"" Jul 10 23:35:33.570441 systemd[1]: Removed slice kubepods-besteffort-pod1aa67573_c049_4a50_9fad_b9f78cda01f2.slice - libcontainer container kubepods-besteffort-pod1aa67573_c049_4a50_9fad_b9f78cda01f2.slice. Jul 10 23:35:33.570653 systemd[1]: kubepods-besteffort-pod1aa67573_c049_4a50_9fad_b9f78cda01f2.slice: Consumed 1.569s CPU time, 44.1M memory peak, 1M read from disk. Jul 10 23:35:34.681825 containerd[1506]: time="2025-07-10T23:35:34.681775704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0877fdcdbff79bbf22e9f7e4226a3749c1326b33359c546f103ebdda5820e2f7\" id:\"5a4ab9a03785e4027444bebe524cd1d1b603181584d91cc2e8f5409efe3d3968\" pid:5788 exited_at:{seconds:1752190534 nanos:681423151}" Jul 10 23:35:34.902002 kubelet[2659]: I0710 23:35:34.901955 2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aa67573-c049-4a50-9fad-b9f78cda01f2" path="/var/lib/kubelet/pods/1aa67573-c049-4a50-9fad-b9f78cda01f2/volumes" Jul 10 23:35:35.997943 containerd[1506]: time="2025-07-10T23:35:35.997800550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d9a346d21d88ccad5c9ef9dc1b5e3eed0bba667ff4e10bddc319b984eae372c\" id:\"bc6fc5394f1f921bb7993466604c0e99daaff0a677336200905849f0ce5723bd\" pid:5812 exited_at:{seconds:1752190535 nanos:997455716}" Jul 10 23:35:36.332298 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:41494.service - OpenSSH per-connection server daemon (10.0.0.1:41494). Jul 10 23:35:36.410626 sshd[5823]: Accepted publickey for core from 10.0.0.1 port 41494 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:35:36.413171 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:36.417955 systemd-logind[1478]: New session 22 of user core. Jul 10 23:35:36.428797 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 23:35:36.662943 sshd[5825]: Connection closed by 10.0.0.1 port 41494 Jul 10 23:35:36.663211 sshd-session[5823]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:36.667358 systemd-logind[1478]: Session 22 logged out. Waiting for processes to exit. Jul 10 23:35:36.667746 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:41494.service: Deactivated successfully. Jul 10 23:35:36.671437 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 23:35:36.675185 systemd-logind[1478]: Removed session 22.