Aug 13 00:10:33.931906 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:10:33.931929 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:10:33.931939 kernel: KASLR enabled
Aug 13 00:10:33.931945 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:10:33.931950 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 13 00:10:33.931956 kernel: random: crng init done
Aug 13 00:10:33.931963 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:10:33.931969 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 13 00:10:33.931975 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:10:33.931982 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:10:33.931988 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:10:33.931994 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:10:33.932000 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:10:33.932006 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:10:33.932013 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:10:33.932021 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:10:33.932028 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:10:33.932034 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:10:33.932040 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 13 00:10:33.932046 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:10:33.932053 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:10:33.932059 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Aug 13 00:10:33.932065 kernel: Zone ranges:
Aug 13 00:10:33.932072 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:10:33.932078 kernel: DMA32 empty
Aug 13 00:10:33.932086 kernel: Normal empty
Aug 13 00:10:33.932092 kernel: Movable zone start for each node
Aug 13 00:10:33.932098 kernel: Early memory node ranges
Aug 13 00:10:33.932104 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 13 00:10:33.932111 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 13 00:10:33.932117 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 13 00:10:33.932123 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 13 00:10:33.932129 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 13 00:10:33.932136 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 13 00:10:33.932142 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 13 00:10:33.932148 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:10:33.932154 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 13 00:10:33.932162 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:10:33.932168 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:10:33.932174 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:10:33.932183 kernel: psci: Trusted OS migration not required
Aug 13 00:10:33.932190 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:10:33.932197 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 13 00:10:33.932206 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:10:33.932212 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:10:33.932219 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 13 00:10:33.932226 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:10:33.932232 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:10:33.932239 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:10:33.932245 kernel: CPU features: detected: Spectre-v4
Aug 13 00:10:33.932252 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:10:33.932259 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:10:33.932265 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:10:33.932273 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:10:33.932280 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:10:33.932290 kernel: alternatives: applying boot alternatives
Aug 13 00:10:33.932297 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:10:33.932305 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:10:33.932311 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:10:33.932318 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:10:33.932325 kernel: Fallback order for Node 0: 0
Aug 13 00:10:33.932331 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 13 00:10:33.932338 kernel: Policy zone: DMA
Aug 13 00:10:33.932362 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:10:33.932372 kernel: software IO TLB: area num 4.
Aug 13 00:10:33.932381 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 13 00:10:33.932389 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Aug 13 00:10:33.932395 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:10:33.932402 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:10:33.932411 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:10:33.932418 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:10:33.932425 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:10:33.932431 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:10:33.932438 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:10:33.932445 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:10:33.932453 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:10:33.932460 kernel: GICv3: 256 SPIs implemented
Aug 13 00:10:33.932466 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:10:33.932473 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:10:33.932480 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 13 00:10:33.932486 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 13 00:10:33.932493 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 13 00:10:33.932500 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:10:33.932507 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:10:33.932513 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 13 00:10:33.932520 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 13 00:10:33.932527 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:10:33.932535 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:10:33.932542 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:10:33.932549 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:10:33.932556 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:10:33.932563 kernel: arm-pv: using stolen time PV
Aug 13 00:10:33.932570 kernel: Console: colour dummy device 80x25
Aug 13 00:10:33.932577 kernel: ACPI: Core revision 20230628
Aug 13 00:10:33.932584 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:10:33.932590 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:10:33.932597 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:10:33.932605 kernel: landlock: Up and running.
Aug 13 00:10:33.932612 kernel: SELinux: Initializing.
Aug 13 00:10:33.932619 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:10:33.932627 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:10:33.932634 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:10:33.932641 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:10:33.932648 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:10:33.932654 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:10:33.932661 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 13 00:10:33.932669 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 13 00:10:33.932676 kernel: Remapping and enabling EFI services.
Aug 13 00:10:33.932683 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:10:33.932690 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:10:33.932697 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 13 00:10:33.932704 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 13 00:10:33.932711 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:10:33.932723 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:10:33.932731 kernel: Detected PIPT I-cache on CPU2
Aug 13 00:10:33.932737 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 13 00:10:33.932746 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 13 00:10:33.932753 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:10:33.932765 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 13 00:10:33.932774 kernel: Detected PIPT I-cache on CPU3
Aug 13 00:10:33.932782 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 13 00:10:33.932789 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 13 00:10:33.932797 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:10:33.932804 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 13 00:10:33.932812 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:10:33.932821 kernel: SMP: Total of 4 processors activated.
Aug 13 00:10:33.932828 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:10:33.932836 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:10:33.932843 kernel: CPU features: detected: Common not Private translations
Aug 13 00:10:33.932850 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:10:33.932857 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 13 00:10:33.932864 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:10:33.932872 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:10:33.932880 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:10:33.932887 kernel: CPU features: detected: RAS Extension Support
Aug 13 00:10:33.932895 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 13 00:10:33.932903 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:10:33.932910 kernel: alternatives: applying system-wide alternatives
Aug 13 00:10:33.932917 kernel: devtmpfs: initialized
Aug 13 00:10:33.932925 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:10:33.932932 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:10:33.932939 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:10:33.932948 kernel: SMBIOS 3.0.0 present.
Aug 13 00:10:33.932955 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 13 00:10:33.932962 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:10:33.932970 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:10:33.932977 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:10:33.932984 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:10:33.932992 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:10:33.932999 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Aug 13 00:10:33.933006 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:10:33.933015 kernel: cpuidle: using governor menu
Aug 13 00:10:33.933022 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:10:33.933029 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:10:33.933037 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:10:33.933044 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:10:33.933051 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 13 00:10:33.933058 kernel: Modules: 0 pages in range for non-PLT usage
Aug 13 00:10:33.933066 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:10:33.933073 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:10:33.933082 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:10:33.933089 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:10:33.933096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:10:33.933104 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:10:33.933111 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:10:33.933118 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:10:33.933125 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:10:33.933133 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:10:33.933140 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:10:33.933149 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:10:33.933156 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:10:33.933163 kernel: ACPI: Interpreter enabled
Aug 13 00:10:33.933170 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:10:33.933178 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:10:33.933185 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:10:33.933192 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:10:33.933199 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:10:33.933441 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:10:33.933539 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:10:33.933609 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:10:33.933676 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 13 00:10:33.933755 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 13 00:10:33.933766 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 13 00:10:33.933774 kernel: PCI host bridge to bus 0000:00
Aug 13 00:10:33.933847 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 13 00:10:33.933911 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:10:33.933969 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 13 00:10:33.934027 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:10:33.934108 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 13 00:10:33.934185 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 00:10:33.934253 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 13 00:10:33.934321 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 13 00:10:33.934398 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:10:33.934466 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:10:33.934530 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 13 00:10:33.934596 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 13 00:10:33.934655 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 13 00:10:33.934714 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:10:33.934790 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 13 00:10:33.934800 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:10:33.934807 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:10:33.934815 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:10:33.934822 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:10:33.934830 kernel: iommu: Default domain type: Translated
Aug 13 00:10:33.934837 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:10:33.934845 kernel: efivars: Registered efivars operations
Aug 13 00:10:33.934854 kernel: vgaarb: loaded
Aug 13 00:10:33.934862 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:10:33.934869 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:10:33.934877 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:10:33.934884 kernel: pnp: PnP ACPI init
Aug 13 00:10:33.934963 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 13 00:10:33.934974 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:10:33.934982 kernel: NET: Registered PF_INET protocol family
Aug 13 00:10:33.934989 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:10:33.935001 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:10:33.935009 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:10:33.935017 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:10:33.935024 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:10:33.935032 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:10:33.935039 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:10:33.935047 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:10:33.935054 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:10:33.935063 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:10:33.935071 kernel: kvm [1]: HYP mode not available
Aug 13 00:10:33.935078 kernel: Initialise system trusted keyrings
Aug 13 00:10:33.935086 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:10:33.935093 kernel: Key type asymmetric registered
Aug 13 00:10:33.935101 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:10:33.935110 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:10:33.935117 kernel: io scheduler mq-deadline registered
Aug 13 00:10:33.935125 kernel: io scheduler kyber registered
Aug 13 00:10:33.935132 kernel: io scheduler bfq registered
Aug 13 00:10:33.935142 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 13 00:10:33.935149 kernel: ACPI: button: Power Button [PWRB]
Aug 13 00:10:33.935157 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 13 00:10:33.935231 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 13 00:10:33.935241 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:10:33.935248 kernel: thunder_xcv, ver 1.0
Aug 13 00:10:33.935255 kernel: thunder_bgx, ver 1.0
Aug 13 00:10:33.935263 kernel: nicpf, ver 1.0
Aug 13 00:10:33.935270 kernel: nicvf, ver 1.0
Aug 13 00:10:33.935361 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:10:33.935442 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:10:33 UTC (1755043833)
Aug 13 00:10:33.935458 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:10:33.935466 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 13 00:10:33.935474 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 13 00:10:33.935481 kernel: watchdog: Hard watchdog permanently disabled
Aug 13 00:10:33.935489 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:10:33.935497 kernel: Segment Routing with IPv6
Aug 13 00:10:33.935507 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:10:33.935514 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:10:33.935522 kernel: Key type dns_resolver registered
Aug 13 00:10:33.935530 kernel: registered taskstats version 1
Aug 13 00:10:33.935537 kernel: Loading compiled-in X.509 certificates
Aug 13 00:10:33.935544 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 13 00:10:33.935552 kernel: Key type .fscrypt registered
Aug 13 00:10:33.935560 kernel: Key type fscrypt-provisioning registered
Aug 13 00:10:33.935567 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:10:33.935576 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:10:33.935584 kernel: ima: No architecture policies found
Aug 13 00:10:33.935592 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:10:33.935599 kernel: clk: Disabling unused clocks
Aug 13 00:10:33.935607 kernel: Freeing unused kernel memory: 39424K
Aug 13 00:10:33.935614 kernel: Run /init as init process
Aug 13 00:10:33.935621 kernel: with arguments:
Aug 13 00:10:33.935629 kernel: /init
Aug 13 00:10:33.935636 kernel: with environment:
Aug 13 00:10:33.935644 kernel: HOME=/
Aug 13 00:10:33.935652 kernel: TERM=linux
Aug 13 00:10:33.935659 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:10:33.935669 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:10:33.935678 systemd[1]: Detected virtualization kvm.
Aug 13 00:10:33.935687 systemd[1]: Detected architecture arm64.
Aug 13 00:10:33.935695 systemd[1]: Running in initrd.
Aug 13 00:10:33.935704 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:10:33.935711 systemd[1]: Hostname set to .
Aug 13 00:10:33.935728 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:10:33.935736 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:10:33.935745 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:10:33.935753 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:10:33.935762 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:10:33.935770 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:10:33.935780 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:10:33.935788 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:10:33.935798 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:10:33.935806 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:10:33.935814 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:10:33.935823 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:10:33.935831 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:10:33.935840 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:10:33.935849 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:10:33.935857 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:10:33.935864 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:10:33.935873 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:10:33.935881 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:10:33.935889 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 00:10:33.935897 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:10:33.935905 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:10:33.935915 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:10:33.935926 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:10:33.935934 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:10:33.935943 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:10:33.935951 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:10:33.935959 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:10:33.935967 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:10:33.935975 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:10:33.935985 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:10:33.935993 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:10:33.936002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:10:33.936010 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:10:33.936021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:10:33.936031 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:10:33.936039 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:10:33.936074 systemd-journald[237]: Collecting audit messages is disabled.
Aug 13 00:10:33.936095 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:10:33.936106 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:10:33.936115 systemd-journald[237]: Journal started
Aug 13 00:10:33.936134 systemd-journald[237]: Runtime Journal (/run/log/journal/2cc14acb52284724a9787c2bd2008973) is 5.9M, max 47.3M, 41.4M free.
Aug 13 00:10:33.926248 systemd-modules-load[238]: Inserted module 'overlay'
Aug 13 00:10:33.940011 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:10:33.940044 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:10:33.941104 systemd-modules-load[238]: Inserted module 'br_netfilter'
Aug 13 00:10:33.941896 kernel: Bridge firewalling registered
Aug 13 00:10:33.942255 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:10:33.950597 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:10:33.952084 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:10:33.953764 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:10:33.956708 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:10:33.960497 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:10:33.961397 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:10:33.969680 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:10:33.972504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:10:33.976371 dracut-cmdline[274]: dracut-dracut-053
Aug 13 00:10:33.979576 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:10:34.015370 systemd-resolved[282]: Positive Trust Anchors:
Aug 13 00:10:34.015387 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:10:34.015419 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:10:34.021356 systemd-resolved[282]: Defaulting to hostname 'linux'.
Aug 13 00:10:34.022615 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:10:34.023515 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:10:34.057381 kernel: SCSI subsystem initialized
Aug 13 00:10:34.062363 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:10:34.071378 kernel: iscsi: registered transport (tcp)
Aug 13 00:10:34.084368 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:10:34.084396 kernel: QLogic iSCSI HBA Driver
Aug 13 00:10:34.135114 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:10:34.148523 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:10:34.164503 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:10:34.164570 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:10:34.165367 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:10:34.216379 kernel: raid6: neonx8 gen() 15742 MB/s
Aug 13 00:10:34.233380 kernel: raid6: neonx4 gen() 15647 MB/s
Aug 13 00:10:34.250362 kernel: raid6: neonx2 gen() 13180 MB/s
Aug 13 00:10:34.267363 kernel: raid6: neonx1 gen() 10472 MB/s
Aug 13 00:10:34.284376 kernel: raid6: int64x8 gen() 6959 MB/s
Aug 13 00:10:34.301372 kernel: raid6: int64x4 gen() 7327 MB/s
Aug 13 00:10:34.318375 kernel: raid6: int64x2 gen() 6128 MB/s
Aug 13 00:10:34.335367 kernel: raid6: int64x1 gen() 5055 MB/s
Aug 13 00:10:34.335386 kernel: raid6: using algorithm neonx8 gen() 15742 MB/s
Aug 13 00:10:34.352376 kernel: raid6: .... xor() 11925 MB/s, rmw enabled
Aug 13 00:10:34.352407 kernel: raid6: using neon recovery algorithm
Aug 13 00:10:34.357564 kernel: xor: measuring software checksum speed
Aug 13 00:10:34.357588 kernel: 8regs : 19769 MB/sec
Aug 13 00:10:34.358647 kernel: 32regs : 19285 MB/sec
Aug 13 00:10:34.358658 kernel: arm64_neon : 26839 MB/sec
Aug 13 00:10:34.358668 kernel: xor: using function: arm64_neon (26839 MB/sec)
Aug 13 00:10:34.413374 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:10:34.427411 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:10:34.438977 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:10:34.453752 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Aug 13 00:10:34.457091 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:10:34.466544 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:10:34.481483 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Aug 13 00:10:34.512374 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:10:34.524535 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:10:34.569828 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:10:34.579562 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:10:34.593422 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:10:34.595195 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:10:34.596364 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:10:34.598312 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:10:34.607538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:10:34.617777 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:10:34.624381 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 13 00:10:34.627120 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 00:10:34.631209 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:10:34.631355 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:10:34.634735 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:10:34.635606 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:10:34.641642 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:10:34.641667 kernel: GPT:9289727 != 19775487
Aug 13 00:10:34.641676 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:10:34.641686 kernel: GPT:9289727 != 19775487
Aug 13 00:10:34.641702 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:10:34.641713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:10:34.635785 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:10:34.640649 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:10:34.648077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:10:34.663256 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (522)
Aug 13 00:10:34.663312 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (503)
Aug 13 00:10:34.665932 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 00:10:34.668458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:10:34.673314 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 00:10:34.683301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 00:10:34.686934 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 00:10:34.687868 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 00:10:34.701534 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:10:34.703142 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:10:34.708110 disk-uuid[550]: Primary Header is updated.
Aug 13 00:10:34.708110 disk-uuid[550]: Secondary Entries is updated.
Aug 13 00:10:34.708110 disk-uuid[550]: Secondary Header is updated.
Aug 13 00:10:34.711374 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:10:34.733957 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:10:35.725367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:10:35.725422 disk-uuid[551]: The operation has completed successfully.
Aug 13 00:10:35.746666 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:10:35.746774 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:10:35.769539 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:10:35.772615 sh[573]: Success
Aug 13 00:10:35.789133 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:10:35.830862 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:10:35.832424 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:10:35.833180 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:10:35.843990 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:10:35.844036 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:10:35.844055 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:10:35.844813 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:10:35.845858 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:10:35.849750 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:10:35.850908 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:10:35.861496 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:10:35.862880 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:10:35.869906 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:10:35.870008 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:10:35.870019 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:10:35.872410 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:10:35.880869 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:10:35.883050 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:10:35.888194 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:10:35.894557 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:10:35.964753 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:10:35.977536 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:10:35.992143 ignition[665]: Ignition 2.19.0
Aug 13 00:10:35.992920 ignition[665]: Stage: fetch-offline
Aug 13 00:10:35.993499 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:10:35.994177 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:10:35.995134 ignition[665]: parsed url from cmdline: ""
Aug 13 00:10:35.995138 ignition[665]: no config URL provided
Aug 13 00:10:35.995143 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:10:35.995158 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:10:35.995181 ignition[665]: op(1): [started] loading QEMU firmware config module
Aug 13 00:10:35.995186 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 00:10:35.998820 systemd-networkd[764]: lo: Link UP
Aug 13 00:10:35.998828 systemd-networkd[764]: lo: Gained carrier
Aug 13 00:10:35.999604 systemd-networkd[764]: Enumeration completed
Aug 13 00:10:36.000502 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:10:36.000506 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:10:36.001408 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:10:36.002696 ignition[665]: op(1): [finished] loading QEMU firmware config module
Aug 13 00:10:36.003250 systemd[1]: Reached target network.target - Network.
Aug 13 00:10:36.005209 systemd-networkd[764]: eth0: Link UP
Aug 13 00:10:36.005213 systemd-networkd[764]: eth0: Gained carrier
Aug 13 00:10:36.005221 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:10:36.017438 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:10:36.046858 ignition[665]: parsing config with SHA512: 024e770148f9f2150bef9da79f4c989cb3fa2f95a393c9d77d95b5ab27de4d8dd7ad9806b90f97cf65e0fbaba0c65539249691aae1b3024c77c98c396be34bed
Aug 13 00:10:36.050936 unknown[665]: fetched base config from "system"
Aug 13 00:10:36.050946 unknown[665]: fetched user config from "qemu"
Aug 13 00:10:36.051323 ignition[665]: fetch-offline: fetch-offline passed
Aug 13 00:10:36.051398 ignition[665]: Ignition finished successfully
Aug 13 00:10:36.052856 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:10:36.054470 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 00:10:36.063517 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:10:36.073532 ignition[771]: Ignition 2.19.0
Aug 13 00:10:36.073541 ignition[771]: Stage: kargs
Aug 13 00:10:36.073733 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:10:36.073744 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:10:36.076785 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:10:36.074617 ignition[771]: kargs: kargs passed
Aug 13 00:10:36.074661 ignition[771]: Ignition finished successfully
Aug 13 00:10:36.084532 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:10:36.093969 ignition[779]: Ignition 2.19.0
Aug 13 00:10:36.093978 ignition[779]: Stage: disks
Aug 13 00:10:36.094144 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:10:36.094153 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:10:36.095029 ignition[779]: disks: disks passed
Aug 13 00:10:36.096510 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:10:36.095074 ignition[779]: Ignition finished successfully
Aug 13 00:10:36.099573 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:10:36.100366 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:10:36.101858 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:10:36.103265 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:10:36.104549 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:10:36.112514 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:10:36.123427 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 00:10:36.127097 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:10:36.139440 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:10:36.187189 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:10:36.188486 kernel: EXT4-fs (vda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:10:36.188372 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:10:36.199443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:10:36.201468 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:10:36.202508 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:10:36.202550 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:10:36.202574 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:10:36.208252 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:10:36.210024 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:10:36.214175 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (797)
Aug 13 00:10:36.214210 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:10:36.214221 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:10:36.214230 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:10:36.217369 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:10:36.218240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:10:36.256072 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:10:36.259704 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:10:36.263643 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:10:36.267440 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:10:36.343879 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:10:36.357490 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:10:36.360588 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:10:36.363363 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:10:36.384696 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:10:36.394151 ignition[913]: INFO : Ignition 2.19.0
Aug 13 00:10:36.394151 ignition[913]: INFO : Stage: mount
Aug 13 00:10:36.396287 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:10:36.396287 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:10:36.396287 ignition[913]: INFO : mount: mount passed
Aug 13 00:10:36.396287 ignition[913]: INFO : Ignition finished successfully
Aug 13 00:10:36.398031 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:10:36.400003 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:10:36.843281 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:10:36.852585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:10:36.858879 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (925)
Aug 13 00:10:36.858928 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:10:36.858940 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:10:36.860351 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:10:36.862370 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:10:36.863232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:10:36.879510 ignition[942]: INFO : Ignition 2.19.0
Aug 13 00:10:36.879510 ignition[942]: INFO : Stage: files
Aug 13 00:10:36.880799 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:10:36.880799 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:10:36.880799 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:10:36.884427 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:10:36.884427 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:10:36.887262 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:10:36.888304 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:10:36.888304 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:10:36.887770 unknown[942]: wrote ssh authorized keys file for user: core
Aug 13 00:10:36.891200 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Aug 13 00:10:36.891200 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Aug 13 00:10:36.932722 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:10:37.249757 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 13 00:10:37.251550 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Aug 13 00:10:37.725426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:10:37.998456 systemd-networkd[764]: eth0: Gained IPv6LL
Aug 13 00:10:38.257457 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 13 00:10:38.257457 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:10:38.261461 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:10:38.261461 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:10:38.261461 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:10:38.261461 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 13 00:10:38.261461 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:10:38.261461 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:10:38.261461 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 13 00:10:38.261461 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:10:38.279897 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:10:38.283601 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:10:38.286102 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:10:38.286102 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:10:38.286102 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:10:38.286102 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:10:38.286102 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:10:38.286102 ignition[942]: INFO : files: files passed
Aug 13 00:10:38.286102 ignition[942]: INFO : Ignition finished successfully
Aug 13 00:10:38.286905 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:10:38.299506 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:10:38.301166 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:10:38.302492 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:10:38.302571 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:10:38.308588 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 13 00:10:38.312016 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:10:38.312016 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:10:38.315334 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:10:38.316956 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:10:38.321739 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:10:38.335618 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:10:38.354625 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:10:38.354769 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:10:38.356507 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:10:38.358001 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:10:38.359447 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:10:38.360245 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:10:38.380412 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:10:38.391564 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:10:38.400128 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:10:38.401104 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:10:38.408658 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:10:38.413839 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:10:38.413975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:10:38.415138 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:10:38.415982 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:10:38.419411 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:10:38.421451 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:10:38.422338 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:10:38.423192 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:10:38.427313 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:10:38.429196 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:10:38.431084 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:10:38.432484 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:10:38.433994 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:10:38.434134 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:10:38.435977 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:10:38.437206 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:10:38.438531 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:10:38.439484 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:10:38.440790 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:10:38.440911 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:10:38.442827 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:10:38.442951 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:10:38.444594 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:10:38.445698 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:10:38.446530 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:10:38.447978 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:10:38.449318 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:10:38.450445 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:10:38.450531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:10:38.451761 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:10:38.451841 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:10:38.453385 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:10:38.453491 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:10:38.454740 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:10:38.454833 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:10:38.466308 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:10:38.467018 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:10:38.467137 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:10:38.469220 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:10:38.470397 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:10:38.470525 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:10:38.471950 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:10:38.472046 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:10:38.478003 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:10:38.478186 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:10:38.484775 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:10:38.485862 ignition[997]: INFO : Ignition 2.19.0 Aug 13 00:10:38.485862 ignition[997]: INFO : Stage: umount Aug 13 00:10:38.485862 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:10:38.485862 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:10:38.489001 ignition[997]: INFO : umount: umount passed Aug 13 00:10:38.489001 ignition[997]: INFO : Ignition finished successfully Aug 13 00:10:38.489801 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:10:38.490592 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Aug 13 00:10:38.492578 systemd[1]: Stopped target network.target - Network. Aug 13 00:10:38.494029 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:10:38.494107 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:10:38.495585 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:10:38.495629 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:10:38.498288 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:10:38.498339 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:10:38.499718 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:10:38.499764 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:10:38.501169 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:10:38.502955 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:10:38.504358 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:10:38.504464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:10:38.506143 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:10:38.506204 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:10:38.508466 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:10:38.508583 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:10:38.511443 systemd-networkd[764]: eth0: DHCPv6 lease lost Aug 13 00:10:38.511870 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:10:38.511938 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:10:38.513400 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:10:38.513523 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:10:38.515098 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:10:38.515148 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:10:38.531473 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:10:38.532129 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:10:38.532191 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:10:38.533587 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:10:38.533626 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:10:38.536315 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:10:38.536371 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:10:38.538119 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:10:38.548006 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:10:38.548427 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:10:38.558064 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:10:38.558218 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:10:38.560030 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:10:38.560068 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Aug 13 00:10:38.561436 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:10:38.561467 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:10:38.562764 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:10:38.562810 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:10:38.564840 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:10:38.564888 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:10:38.567007 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:10:38.567053 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:10:38.581536 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:10:38.582457 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:10:38.582515 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:10:38.584188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:10:38.584231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:10:38.587415 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:10:38.587512 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:10:38.589043 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:10:38.591034 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:10:38.602803 systemd[1]: Switching root. Aug 13 00:10:38.628273 systemd-journald[237]: Journal stopped Aug 13 00:10:39.340823 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Aug 13 00:10:39.340910 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:10:39.340926 kernel: SELinux: policy capability open_perms=1 Aug 13 00:10:39.340936 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:10:39.340946 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:10:39.340955 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:10:39.340969 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:10:39.340982 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:10:39.340991 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:10:39.341001 kernel: audit: type=1403 audit(1755043838.786:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:10:39.341011 systemd[1]: Successfully loaded SELinux policy in 41.418ms. Aug 13 00:10:39.341031 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.122ms. Aug 13 00:10:39.341043 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:10:39.341056 systemd[1]: Detected virtualization kvm. Aug 13 00:10:39.341068 systemd[1]: Detected architecture arm64. Aug 13 00:10:39.341078 systemd[1]: Detected first boot. Aug 13 00:10:39.341089 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:10:39.341099 zram_generator::config[1042]: No configuration found. Aug 13 00:10:39.341111 systemd[1]: Populated /etc with preset unit settings. 
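[zram_generator logs "No configuration found" because a stock image ships none, so no zram swap device is created. Swap-on-zram would be enabled with a drop-in along these lines (values illustrative):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd
]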
Aug 13 00:10:39.341121 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:10:39.341131 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:10:39.341142 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:10:39.341155 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:10:39.341166 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:10:39.341176 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:10:39.341187 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:10:39.341197 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:10:39.341208 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:10:39.341223 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:10:39.341234 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:10:39.341245 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:10:39.341257 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:10:39.341269 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:10:39.341279 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:10:39.341290 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:10:39.341300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:10:39.341310 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 13 00:10:39.341321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:10:39.341332 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:10:39.341375 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:10:39.341394 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:10:39.341405 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:10:39.341416 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:10:39.341426 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:10:39.341437 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:10:39.341448 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:10:39.341458 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:10:39.341469 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:10:39.341481 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:10:39.341493 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:10:39.341504 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:10:39.341514 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:10:39.341524 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
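[Unit names like dev-disk-by\x2dlabel-OEM.device above are the systemd path-escaped form of the device node: "/" separators become "-", and a literal "-" inside a component becomes \x2d. systemd-escape reproduces the mapping:

    $ systemd-escape --path --suffix=device /dev/disk/by-label/OEM
    dev-disk-by\x2dlabel-OEM.device
]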
Aug 13 00:10:39.341535 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:10:39.341545 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:10:39.341555 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:10:39.341566 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:10:39.341577 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:10:39.341589 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:10:39.341604 systemd[1]: Reached target machines.target - Containers. Aug 13 00:10:39.341614 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:10:39.341625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:10:39.341635 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:10:39.341645 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:10:39.341656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:10:39.341668 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:10:39.341679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:10:39.341689 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:10:39.341700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:10:39.341718 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:10:39.341730 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:10:39.341741 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:10:39.341751 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:10:39.341762 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:10:39.341775 kernel: fuse: init (API version 7.39) Aug 13 00:10:39.341785 kernel: loop: module loaded Aug 13 00:10:39.341794 kernel: ACPI: bus type drm_connector registered Aug 13 00:10:39.341804 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:10:39.341815 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:10:39.341825 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:10:39.341836 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:10:39.341846 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:10:39.341882 systemd-journald[1106]: Collecting audit messages is disabled. Aug 13 00:10:39.341908 systemd-journald[1106]: Journal started Aug 13 00:10:39.341929 systemd-journald[1106]: Runtime Journal (/run/log/journal/2cc14acb52284724a9787c2bd2008973) is 5.9M, max 47.3M, 41.4M free. Aug 13 00:10:39.157910 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:10:39.176192 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 00:10:39.176573 systemd[1]: systemd-journald.service: Deactivated successfully. 
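[The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop instances started above are all stamped from systemd's single modprobe@.service template, which roughly looks like this (exact contents vary by systemd version; %i is the instance name, i.e. the module):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target
    ConditionCapability=CAP_SYS_MODULE

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i
]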
Aug 13 00:10:39.343499 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:10:39.343541 systemd[1]: Stopped verity-setup.service. Aug 13 00:10:39.346941 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:10:39.347576 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:10:39.348533 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:10:39.349507 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:10:39.350323 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:10:39.351272 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:10:39.352287 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:10:39.354383 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:10:39.355563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:10:39.356823 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:10:39.356974 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:10:39.358100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:10:39.358238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:10:39.359438 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:10:39.359571 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:10:39.360579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:10:39.360723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:10:39.362754 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:10:39.362900 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:10:39.364287 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:10:39.364447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:10:39.366771 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:10:39.368015 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:10:39.369493 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:10:39.383541 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:10:39.396575 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:10:39.399060 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:10:39.400734 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:10:39.400853 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:10:39.403115 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 00:10:39.406784 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:10:39.408881 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:10:39.410290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:10:39.412068 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Aug 13 00:10:39.414369 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:10:39.415750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:10:39.419769 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:10:39.421696 systemd-journald[1106]: Time spent on flushing to /var/log/journal/2cc14acb52284724a9787c2bd2008973 is 16.949ms for 849 entries. Aug 13 00:10:39.421696 systemd-journald[1106]: System Journal (/var/log/journal/2cc14acb52284724a9787c2bd2008973) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:10:39.443715 systemd-journald[1106]: Received client request to flush runtime journal. Aug 13 00:10:39.421760 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:10:39.423509 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:10:39.428629 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:10:39.431573 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:10:39.437925 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:10:39.439270 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:10:39.440395 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:10:39.441639 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:10:39.442958 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:10:39.447028 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:10:39.450977 kernel: loop0: detected capacity change from 0 to 114432 Aug 13 00:10:39.450313 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:10:39.459612 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:10:39.463500 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:10:39.464647 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:10:39.466408 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:10:39.475311 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:10:39.484171 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:10:39.485198 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 00:10:39.494373 kernel: loop1: detected capacity change from 0 to 211168 Aug 13 00:10:39.492820 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:10:39.503433 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:10:39.524990 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Aug 13 00:10:39.525006 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Aug 13 00:10:39.526365 kernel: loop2: detected capacity change from 0 to 114328 Aug 13 00:10:39.530664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
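[The journal size caps reported above (runtime max 47.3M, system max 195.6M) are computed from the size of the backing filesystems; they can be pinned explicitly with a journald drop-in, e.g.:

    # /etc/systemd/journald.conf.d/00-size.conf (illustrative values)
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=196M
]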
Aug 13 00:10:39.573371 kernel: loop3: detected capacity change from 0 to 114432 Aug 13 00:10:39.578359 kernel: loop4: detected capacity change from 0 to 211168 Aug 13 00:10:39.587390 kernel: loop5: detected capacity change from 0 to 114328 Aug 13 00:10:39.591797 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 00:10:39.592239 (sd-merge)[1179]: Merged extensions into '/usr'. Aug 13 00:10:39.596314 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:10:39.596459 systemd[1]: Reloading... Aug 13 00:10:39.663507 zram_generator::config[1205]: No configuration found. Aug 13 00:10:39.743719 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:10:39.765856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:10:39.801738 systemd[1]: Reloading finished in 204 ms. Aug 13 00:10:39.834950 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:10:39.837885 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:10:39.851560 systemd[1]: Starting ensure-sysext.service... Aug 13 00:10:39.853314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:10:39.866966 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:10:39.866982 systemd[1]: Reloading... Aug 13 00:10:39.874380 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:10:39.875025 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:10:39.875823 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:10:39.876188 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Aug 13 00:10:39.876334 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Aug 13 00:10:39.879329 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:10:39.879458 systemd-tmpfiles[1241]: Skipping /boot Aug 13 00:10:39.886990 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:10:39.887101 systemd-tmpfiles[1241]: Skipping /boot Aug 13 00:10:39.920382 zram_generator::config[1271]: No configuration found. Aug 13 00:10:40.003176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:10:40.041804 systemd[1]: Reloading finished in 174 ms. Aug 13 00:10:40.058107 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:10:40.075852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:10:40.084093 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:10:40.086872 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:10:40.089097 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
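[The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes images (backed by the loop3-5 devices above) onto /usr. Each .raw image must carry an extension-release file whose fields match the host; a sketch of the expected layout, with illustrative field values:

    usr/lib/extension-release.d/extension-release.kubernetes
        ID=flatcar
        SYSEXT_LEVEL=1.0

    $ systemd-sysext status    # lists the merged extensions
]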
Aug 13 00:10:40.092739 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:10:40.096540 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:10:40.100537 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:10:40.108293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:10:40.109734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:10:40.111854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:10:40.117671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:10:40.118612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:10:40.122746 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:10:40.124766 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:10:40.124924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:10:40.126683 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:10:40.131604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:10:40.131771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:10:40.132974 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Aug 13 00:10:40.138309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:10:40.140061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:10:40.144636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:10:40.156639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:10:40.157482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:10:40.157581 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:10:40.161615 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:10:40.162998 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:10:40.166191 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:10:40.167584 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:10:40.169365 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:10:40.169502 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:10:40.170639 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:10:40.194838 systemd[1]: Finished ensure-sysext.service. Aug 13 00:10:40.198563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:10:40.204040 augenrules[1361]: No rules Aug 13 00:10:40.206641 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:10:40.209165 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
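[augenrules reports "No rules" because /etc/audit/rules.d/ is empty on a stock image; rules would be supplied there as auditctl-syntax fragments, e.g. (illustrative):

    # /etc/audit/rules.d/10-identity.rules
    -w /etc/passwd -p wa -k identity
]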
Aug 13 00:10:40.219324 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:10:40.222659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:10:40.223553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:10:40.225195 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:10:40.231522 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:10:40.232579 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:10:40.233227 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:10:40.234576 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:10:40.236739 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:10:40.236876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:10:40.237976 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:10:40.238112 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:10:40.238423 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1353) Aug 13 00:10:40.239226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:10:40.239383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:10:40.240531 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:10:40.240661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:10:40.255656 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 13 00:10:40.263898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 00:10:40.270403 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:10:40.271206 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:10:40.271272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:10:40.303148 systemd-resolved[1308]: Positive Trust Anchors: Aug 13 00:10:40.303489 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:10:40.303578 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:10:40.306008 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:10:40.318722 systemd-resolved[1308]: Defaulting to hostname 'linux'. 
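[The positive trust anchor logged above is the DNSSEC root-zone KSK DS record that systemd-resolved compiles in. The same RR syntax is accepted for site-supplied anchors via a drop-in:

    # /etc/dnssec-trust-anchors.d/root.positive
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
]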
Aug 13 00:10:40.325068 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:10:40.327139 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:10:40.327522 systemd-networkd[1374]: lo: Link UP Aug 13 00:10:40.327531 systemd-networkd[1374]: lo: Gained carrier Aug 13 00:10:40.328505 systemd-networkd[1374]: Enumeration completed Aug 13 00:10:40.328578 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:10:40.329773 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:10:40.329782 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:10:40.329823 systemd[1]: Reached target network.target - Network. Aug 13 00:10:40.330714 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:10:40.331953 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:10:40.332028 systemd-networkd[1374]: eth0: Link UP Aug 13 00:10:40.332035 systemd-networkd[1374]: eth0: Gained carrier Aug 13 00:10:40.332049 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:10:40.339899 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:10:40.344417 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:10:40.345123 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection. Aug 13 00:10:40.346141 systemd-timesyncd[1375]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 00:10:40.346200 systemd-timesyncd[1375]: Initial clock synchronization to Wed 2025-08-13 00:10:40.641047 UTC. Aug 13 00:10:40.364641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:10:40.375477 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:10:40.377923 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:10:40.394015 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:10:40.425047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:10:40.432182 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:10:40.433452 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:10:40.434254 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:10:40.435480 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:10:40.436416 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:10:40.437616 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:10:40.438793 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:10:40.442003 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:10:40.443309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
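[eth0 matched Flatcar's catch-all /usr/lib/systemd/network/zz-default.network and acquired 10.0.0.85/16 over DHCP. In essence that unit is a match-everything DHCP policy (sketch; the shipped file carries more options):

    [Match]
    Name=*

    [Network]
    DHCP=yes
]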
Aug 13 00:10:40.443470 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:10:40.444696 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:10:40.446567 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:10:40.455851 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:10:40.460359 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:10:40.462365 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:10:40.463626 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:10:40.464480 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:10:40.465164 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:10:40.465978 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:10:40.466014 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:10:40.466888 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:10:40.469592 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:10:40.471045 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:10:40.471837 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:10:40.476550 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:10:40.478841 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:10:40.480552 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:10:40.487487 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:10:40.489203 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:10:40.491660 jq[1409]: false Aug 13 00:10:40.493521 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:10:40.496908 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:10:40.502418 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:10:40.502878 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:10:40.503512 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:10:40.505296 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:10:40.507626 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:10:40.511909 dbus-daemon[1408]: [system] SELinux support is enabled Aug 13 00:10:40.513780 jq[1424]: true Aug 13 00:10:40.513797 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:10:40.528770 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:10:40.528931 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:10:40.529190 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:10:40.529330 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 13 00:10:40.532779 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:10:40.532958 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:10:40.545674 (ntainerd)[1430]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:10:40.547298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:10:40.547945 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:10:40.550552 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:10:40.550576 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:10:40.553526 jq[1429]: true Aug 13 00:10:40.560924 extend-filesystems[1410]: Found loop3 Aug 13 00:10:40.561771 extend-filesystems[1410]: Found loop4 Aug 13 00:10:40.561771 extend-filesystems[1410]: Found loop5 Aug 13 00:10:40.561771 extend-filesystems[1410]: Found vda Aug 13 00:10:40.561771 extend-filesystems[1410]: Found vda1 Aug 13 00:10:40.561771 extend-filesystems[1410]: Found vda2 Aug 13 00:10:40.561771 extend-filesystems[1410]: Found vda3 Aug 13 00:10:40.561771 extend-filesystems[1410]: Found usr Aug 13 00:10:40.561771 extend-filesystems[1410]: Found vda4 Aug 13 00:10:40.561771 extend-filesystems[1410]: Found vda6 Aug 13 00:10:40.561771 extend-filesystems[1410]: Found vda7 Aug 13 00:10:40.561771 extend-filesystems[1410]: Found vda9 Aug 13 00:10:40.561771 extend-filesystems[1410]: Checking size of /dev/vda9 Aug 13 00:10:40.588522 extend-filesystems[1410]: Resized partition /dev/vda9 Aug 13 00:10:40.589242 tar[1428]: linux-arm64/LICENSE Aug 13 00:10:40.589242 tar[1428]: linux-arm64/helm Aug 13 00:10:40.589501 update_engine[1422]: I20250813 00:10:40.564554 1422 main.cc:92] Flatcar Update Engine starting Aug 13 00:10:40.589501 update_engine[1422]: I20250813 00:10:40.570693 1422 update_check_scheduler.cc:74] Next update check in 4m57s Aug 13 00:10:40.564600 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 00:10:40.590036 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:10:40.567516 systemd-logind[1418]: New seat seat0. Aug 13 00:10:40.569869 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:10:40.572844 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:10:40.578649 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:10:40.594407 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 00:10:40.618030 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1353) Aug 13 00:10:40.633367 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 00:10:40.657471 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:10:40.657471 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:10:40.657471 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
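[extend-filesystems grew /dev/vda9 online from 553472 to 1864699 4k blocks. The manual equivalent is an online resize2fs against the mounted root (sketch; the service wraps this in its own checks):

    resize2fs /dev/vda9    # grows a mounted ext4 filesystem to fill its partition
]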
Aug 13 00:10:40.667391 extend-filesystems[1410]: Resized filesystem in /dev/vda9 Aug 13 00:10:40.659726 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:10:40.659902 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:10:40.687149 bash[1461]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:10:40.689449 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:10:40.691882 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 00:10:40.706334 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:10:40.779951 containerd[1430]: time="2025-08-13T00:10:40.779806800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 00:10:40.804087 containerd[1430]: time="2025-08-13T00:10:40.804035040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.805602000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.805641840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.805660880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.805847400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.805866520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.805920040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.805933720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.806092520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.806108560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.806122800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806725 containerd[1430]: time="2025-08-13T00:10:40.806132880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
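[locksmithd starting with strategy="reboot" reflects the update policy read from update.conf; illustrative:

    # /etc/flatcar/update.conf
    REBOOT_STRATEGY=reboot    # other documented strategies: etcd-lock, best-effort, off
]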
type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806984 containerd[1430]: time="2025-08-13T00:10:40.806212560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806984 containerd[1430]: time="2025-08-13T00:10:40.806420840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806984 containerd[1430]: time="2025-08-13T00:10:40.806516000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:10:40.806984 containerd[1430]: time="2025-08-13T00:10:40.806529080Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:10:40.806984 containerd[1430]: time="2025-08-13T00:10:40.806604600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:10:40.806984 containerd[1430]: time="2025-08-13T00:10:40.806651440Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:10:40.811720 containerd[1430]: time="2025-08-13T00:10:40.811670680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:10:40.811930 containerd[1430]: time="2025-08-13T00:10:40.811909080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:10:40.812072 containerd[1430]: time="2025-08-13T00:10:40.812054240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:10:40.812137 containerd[1430]: time="2025-08-13T00:10:40.812124240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:10:40.812251 containerd[1430]: time="2025-08-13T00:10:40.812234880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:10:40.812582 containerd[1430]: time="2025-08-13T00:10:40.812492360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:10:40.813115 containerd[1430]: time="2025-08-13T00:10:40.813032080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:10:40.813448 containerd[1430]: time="2025-08-13T00:10:40.813425840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:10:40.813590 containerd[1430]: time="2025-08-13T00:10:40.813513240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:10:40.813665 containerd[1430]: time="2025-08-13T00:10:40.813651160Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:10:40.813734 containerd[1430]: time="2025-08-13T00:10:40.813719960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:10:40.813866 containerd[1430]: time="2025-08-13T00:10:40.813848280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Aug 13 00:10:40.813928 containerd[1430]: time="2025-08-13T00:10:40.813915160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:10:40.813984 containerd[1430]: time="2025-08-13T00:10:40.813972120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:10:40.814105 containerd[1430]: time="2025-08-13T00:10:40.814087240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:10:40.814177 containerd[1430]: time="2025-08-13T00:10:40.814163400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:10:40.814230 containerd[1430]: time="2025-08-13T00:10:40.814218880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:10:40.814429 containerd[1430]: time="2025-08-13T00:10:40.814340960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:10:40.814509 containerd[1430]: time="2025-08-13T00:10:40.814495040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.814568 containerd[1430]: time="2025-08-13T00:10:40.814555360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.814724 containerd[1430]: time="2025-08-13T00:10:40.814693320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814778520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814798120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814812720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814824840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814838840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814852000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814875920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814890800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814902800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814917080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814943160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814969320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814984080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815112 containerd[1430]: time="2025-08-13T00:10:40.814995120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:10:40.815745 containerd[1430]: time="2025-08-13T00:10:40.815544160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:10:40.815987 containerd[1430]: time="2025-08-13T00:10:40.815577480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:10:40.815987 containerd[1430]: time="2025-08-13T00:10:40.815863960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:10:40.815987 containerd[1430]: time="2025-08-13T00:10:40.815884560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:10:40.815987 containerd[1430]: time="2025-08-13T00:10:40.815896920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:10:40.815987 containerd[1430]: time="2025-08-13T00:10:40.815911920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:10:40.815987 containerd[1430]: time="2025-08-13T00:10:40.815922920Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:10:40.817291 containerd[1430]: time="2025-08-13T00:10:40.816278760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:10:40.817368 containerd[1430]: time="2025-08-13T00:10:40.816674320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:10:40.817368 containerd[1430]: time="2025-08-13T00:10:40.816745400Z" level=info msg="Connect containerd service" Aug 13 00:10:40.817368 containerd[1430]: time="2025-08-13T00:10:40.816781840Z" level=info msg="using legacy CRI server" Aug 13 00:10:40.817368 containerd[1430]: time="2025-08-13T00:10:40.816788680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:10:40.817368 containerd[1430]: time="2025-08-13T00:10:40.816871680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:10:40.818462 containerd[1430]: time="2025-08-13T00:10:40.818435360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:10:40.818864 
containerd[1430]: time="2025-08-13T00:10:40.818818760Z" level=info msg="Start subscribing containerd event" Aug 13 00:10:40.819811 containerd[1430]: time="2025-08-13T00:10:40.819737000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:10:40.819811 containerd[1430]: time="2025-08-13T00:10:40.819799920Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:10:40.820406 containerd[1430]: time="2025-08-13T00:10:40.820358800Z" level=info msg="Start recovering state" Aug 13 00:10:40.820717 containerd[1430]: time="2025-08-13T00:10:40.820675440Z" level=info msg="Start event monitor" Aug 13 00:10:40.820859 containerd[1430]: time="2025-08-13T00:10:40.820837200Z" level=info msg="Start snapshots syncer" Aug 13 00:10:40.820968 containerd[1430]: time="2025-08-13T00:10:40.820952960Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:10:40.821106 containerd[1430]: time="2025-08-13T00:10:40.821088800Z" level=info msg="Start streaming server" Aug 13 00:10:40.821556 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:10:40.822911 containerd[1430]: time="2025-08-13T00:10:40.822747240Z" level=info msg="containerd successfully booted in 0.045329s" Aug 13 00:10:40.996695 tar[1428]: linux-arm64/README.md Aug 13 00:10:41.014534 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:10:41.230607 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:10:41.251888 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:10:41.265672 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:10:41.271640 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:10:41.272473 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:10:41.275071 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:10:41.290347 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:10:41.304709 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:10:41.306929 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 13 00:10:41.308057 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:10:41.711987 systemd-networkd[1374]: eth0: Gained IPv6LL Aug 13 00:10:41.714489 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:10:41.716063 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:10:41.729728 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 00:10:41.732290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:10:41.734407 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:10:41.751194 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 00:10:41.751467 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 00:10:41.752835 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:10:41.755348 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:10:42.354079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:10:42.355448 systemd[1]: Reached target multi-user.target - Multi-User System. 
Aug 13 00:10:42.357021 systemd[1]: Startup finished in 647ms (kernel) + 5.061s (initrd) + 3.609s (userspace) = 9.318s. Aug 13 00:10:42.358272 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:10:42.856193 kubelet[1522]: E0813 00:10:42.856094 1522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:10:42.858841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:10:42.858989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:10:46.353350 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:10:46.354817 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:60106.service - OpenSSH per-connection server daemon (10.0.0.1:60106). Aug 13 00:10:46.416483 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 60106 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:10:46.420598 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:10:46.434919 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:10:46.450795 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:10:46.453881 systemd-logind[1418]: New session 1 of user core. Aug 13 00:10:46.463591 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:10:46.487827 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:10:46.491075 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:10:46.573520 systemd[1539]: Queued start job for default target default.target. Aug 13 00:10:46.582448 systemd[1539]: Created slice app.slice - User Application Slice. Aug 13 00:10:46.582482 systemd[1539]: Reached target paths.target - Paths. Aug 13 00:10:46.582495 systemd[1539]: Reached target timers.target - Timers. Aug 13 00:10:46.583897 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:10:46.595591 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:10:46.595737 systemd[1539]: Reached target sockets.target - Sockets. Aug 13 00:10:46.595757 systemd[1539]: Reached target basic.target - Basic System. Aug 13 00:10:46.595803 systemd[1539]: Reached target default.target - Main User Target. Aug 13 00:10:46.595852 systemd[1539]: Startup finished in 97ms. Aug 13 00:10:46.596080 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:10:46.598264 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:10:46.672347 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:60110.service - OpenSSH per-connection server daemon (10.0.0.1:60110). Aug 13 00:10:46.708480 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 60110 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:10:46.710250 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:10:46.715395 systemd-logind[1418]: New session 2 of user core. Aug 13 00:10:46.732610 systemd[1]: Started session-2.scope - Session 2 of User core. 
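
The kubelet crash here is expected: /var/lib/kubelet/config.yaml does not exist yet, and on a kubeadm-style flow it is written later by kubeadm init or kubeadm join. Purely as a stopgap sketch (every field below is an assumption, not read from this host), a minimal config would look like:

    # hypothetical minimal KubeletConfiguration; kubeadm normally generates this file
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # matches the SystemdCgroup:true runc option logged above
    staticPodPath: /etc/kubernetes/manifests
    EOF
    systemctl restart kubelet
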
Aug 13 00:10:46.786280 sshd[1550]: pam_unix(sshd:session): session closed for user core Aug 13 00:10:46.802334 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:60110.service: Deactivated successfully. Aug 13 00:10:46.803924 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:10:46.805394 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:10:46.817058 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:60124.service - OpenSSH per-connection server daemon (10.0.0.1:60124). Aug 13 00:10:46.818096 systemd-logind[1418]: Removed session 2. Aug 13 00:10:46.855128 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 60124 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:10:46.857036 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:10:46.862083 systemd-logind[1418]: New session 3 of user core. Aug 13 00:10:46.872578 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:10:46.923799 sshd[1557]: pam_unix(sshd:session): session closed for user core Aug 13 00:10:46.933058 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:60124.service: Deactivated successfully. Aug 13 00:10:46.934711 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:10:46.936058 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:10:46.937652 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:60140.service - OpenSSH per-connection server daemon (10.0.0.1:60140). Aug 13 00:10:46.941097 systemd-logind[1418]: Removed session 3. Aug 13 00:10:46.988167 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 60140 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:10:46.989988 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:10:46.999818 systemd-logind[1418]: New session 4 of user core. Aug 13 00:10:47.007599 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:10:47.069051 sshd[1564]: pam_unix(sshd:session): session closed for user core Aug 13 00:10:47.090231 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:60140.service: Deactivated successfully. Aug 13 00:10:47.091820 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:10:47.093552 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:10:47.103726 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:60148.service - OpenSSH per-connection server daemon (10.0.0.1:60148). Aug 13 00:10:47.105004 systemd-logind[1418]: Removed session 4. Aug 13 00:10:47.136189 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 60148 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:10:47.137949 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:10:47.142997 systemd-logind[1418]: New session 5 of user core. Aug 13 00:10:47.152554 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:10:47.227854 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:10:47.228179 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:10:47.244597 sudo[1574]: pam_unix(sudo:session): session closed for user root Aug 13 00:10:47.246797 sshd[1571]: pam_unix(sshd:session): session closed for user core Aug 13 00:10:47.256413 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:60148.service: Deactivated successfully. 
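
Each inbound connection gets its own sshd@<local>:22-<peer>:<port>.service instance plus a logind session scope, which is why every short-lived login shows up as a full unit start/stop cycle. To see the live state instead of scrolling the journal:

    loginctl list-sessions                     # active logind sessions for user core
    systemctl list-units 'sshd@*' --no-legend  # one unit per open connection
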
Aug 13 00:10:47.258442 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:10:47.259958 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:10:47.271760 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:60162.service - OpenSSH per-connection server daemon (10.0.0.1:60162). Aug 13 00:10:47.273322 systemd-logind[1418]: Removed session 5. Aug 13 00:10:47.307689 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 60162 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:10:47.309717 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:10:47.314566 systemd-logind[1418]: New session 6 of user core. Aug 13 00:10:47.324575 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:10:47.376838 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:10:47.377136 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:10:47.380638 sudo[1583]: pam_unix(sudo:session): session closed for user root Aug 13 00:10:47.386195 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:10:47.386518 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:10:47.403064 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:10:47.404254 auditctl[1586]: No rules Aug 13 00:10:47.404616 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:10:47.404822 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:10:47.407518 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:10:47.435317 augenrules[1604]: No rules Aug 13 00:10:47.436956 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:10:47.438062 sudo[1582]: pam_unix(sudo:session): session closed for user root Aug 13 00:10:47.440719 sshd[1579]: pam_unix(sshd:session): session closed for user core Aug 13 00:10:47.448970 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:60162.service: Deactivated successfully. Aug 13 00:10:47.450724 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:10:47.453309 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:10:47.453744 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:60178.service - OpenSSH per-connection server daemon (10.0.0.1:60178). Aug 13 00:10:47.455071 systemd-logind[1418]: Removed session 6. Aug 13 00:10:47.490740 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 60178 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:10:47.491856 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:10:47.496093 systemd-logind[1418]: New session 7 of user core. Aug 13 00:10:47.505562 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:10:47.559708 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:10:47.560299 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:10:47.925734 systemd[1]: Starting docker.service - Docker Application Container Engine... 
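
The two sudo invocations above are a first-boot provisioning step: delete the shipped audit rule files, then restart audit-rules, after which auditctl reports an empty ruleset ("No rules" in both the stop and start paths). Replayed as plain commands:

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    sudo auditctl -l    # should now print: No rules
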
Aug 13 00:10:47.925809 (dockerd)[1633]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:10:48.243306 dockerd[1633]: time="2025-08-13T00:10:48.243159355Z" level=info msg="Starting up" Aug 13 00:10:48.444424 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1878817611-merged.mount: Deactivated successfully. Aug 13 00:10:48.470287 dockerd[1633]: time="2025-08-13T00:10:48.470223585Z" level=info msg="Loading containers: start." Aug 13 00:10:48.604087 kernel: Initializing XFRM netlink socket Aug 13 00:10:48.682035 systemd-networkd[1374]: docker0: Link UP Aug 13 00:10:48.700870 dockerd[1633]: time="2025-08-13T00:10:48.700801288Z" level=info msg="Loading containers: done." Aug 13 00:10:48.740695 dockerd[1633]: time="2025-08-13T00:10:48.740628637Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:10:48.740903 dockerd[1633]: time="2025-08-13T00:10:48.740760032Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:10:48.740903 dockerd[1633]: time="2025-08-13T00:10:48.740887938Z" level=info msg="Daemon has completed initialization" Aug 13 00:10:48.806585 dockerd[1633]: time="2025-08-13T00:10:48.806361692Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:10:48.806655 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:10:49.298830 containerd[1430]: time="2025-08-13T00:10:49.298780824Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 00:10:49.441186 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2799112134-merged.mount: Deactivated successfully. Aug 13 00:10:50.082657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2181624184.mount: Deactivated successfully. 
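
dockerd comes up on overlay2 but warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; that only slows image builds, not container runtime paths. Two quick checks (the second assumes the kernel exposes its config via CONFIG_IKCONFIG_PROC):

    docker info --format '{{.Driver}}'                   # expect: overlay2
    zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR  # verify the kernel option
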
Aug 13 00:10:50.948658 containerd[1430]: time="2025-08-13T00:10:50.948484680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:50.949621 containerd[1430]: time="2025-08-13T00:10:50.949518698Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=27352096" Aug 13 00:10:50.950544 containerd[1430]: time="2025-08-13T00:10:50.950510049Z" level=info msg="ImageCreate event name:\"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:50.955949 containerd[1430]: time="2025-08-13T00:10:50.955893885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:50.957266 containerd[1430]: time="2025-08-13T00:10:50.957230380Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"27348894\" in 1.658386829s" Aug 13 00:10:50.957266 containerd[1430]: time="2025-08-13T00:10:50.957269287Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\"" Aug 13 00:10:50.962822 containerd[1430]: time="2025-08-13T00:10:50.962766365Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 00:10:52.612722 containerd[1430]: time="2025-08-13T00:10:52.612044112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:52.612722 containerd[1430]: time="2025-08-13T00:10:52.612670827Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=23537848" Aug 13 00:10:52.613446 containerd[1430]: time="2025-08-13T00:10:52.613414446Z" level=info msg="ImageCreate event name:\"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:52.616484 containerd[1430]: time="2025-08-13T00:10:52.616417806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:52.619367 containerd[1430]: time="2025-08-13T00:10:52.618414936Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"25092764\" in 1.655595662s" Aug 13 00:10:52.619367 containerd[1430]: time="2025-08-13T00:10:52.618462456Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\"" Aug 13 00:10:52.620706 
containerd[1430]: time="2025-08-13T00:10:52.620675846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 00:10:52.920090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:10:52.934550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:10:53.043966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:10:53.047820 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:10:53.081867 kubelet[1848]: E0813 00:10:53.081810 1848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:10:53.085321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:10:53.085476 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:10:53.783310 containerd[1430]: time="2025-08-13T00:10:53.783256814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:53.783706 containerd[1430]: time="2025-08-13T00:10:53.783682110Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=18293526" Aug 13 00:10:53.784616 containerd[1430]: time="2025-08-13T00:10:53.784590084Z" level=info msg="ImageCreate event name:\"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:53.787675 containerd[1430]: time="2025-08-13T00:10:53.787629412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:53.788967 containerd[1430]: time="2025-08-13T00:10:53.788937416Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"19848460\" in 1.168122853s" Aug 13 00:10:53.789026 containerd[1430]: time="2025-08-13T00:10:53.788973159Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\"" Aug 13 00:10:53.789556 containerd[1430]: time="2025-08-13T00:10:53.789535022Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 00:10:54.768259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003135069.mount: Deactivated successfully. 
Aug 13 00:10:55.043206 containerd[1430]: time="2025-08-13T00:10:55.043087147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:55.045810 containerd[1430]: time="2025-08-13T00:10:55.045757889Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=28199474" Aug 13 00:10:55.046705 containerd[1430]: time="2025-08-13T00:10:55.046664013Z" level=info msg="ImageCreate event name:\"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:55.048456 containerd[1430]: time="2025-08-13T00:10:55.048428309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:55.049268 containerd[1430]: time="2025-08-13T00:10:55.049207797Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"28198491\" in 1.25964132s" Aug 13 00:10:55.049331 containerd[1430]: time="2025-08-13T00:10:55.049260977Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\"" Aug 13 00:10:55.050026 containerd[1430]: time="2025-08-13T00:10:55.049833414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 00:10:55.640716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760332367.mount: Deactivated successfully. 
Aug 13 00:10:56.611722 containerd[1430]: time="2025-08-13T00:10:56.611478312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:56.612730 containerd[1430]: time="2025-08-13T00:10:56.612463068Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Aug 13 00:10:56.613589 containerd[1430]: time="2025-08-13T00:10:56.613521509Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:56.616854 containerd[1430]: time="2025-08-13T00:10:56.616804684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:56.618396 containerd[1430]: time="2025-08-13T00:10:56.618331886Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.568465734s" Aug 13 00:10:56.618396 containerd[1430]: time="2025-08-13T00:10:56.618377552Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Aug 13 00:10:56.618983 containerd[1430]: time="2025-08-13T00:10:56.618953363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:10:57.058537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount979025170.mount: Deactivated successfully. 
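
Two pause versions are now in play: this pre-pull fetches registry.k8s.io/pause:3.10, while containerd's CRI config above still declares SandboxImage registry.k8s.io/pause:3.8, which containerd later pulls itself for sandboxes. Harmless, but it leaves two pause images on disk. A hypothetical override to align them, assuming /etc/containerd/config.toml is the active config file:

    cat >>/etc/containerd/config.toml <<'EOF'
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"
    EOF
    systemctl restart containerd
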
Aug 13 00:10:57.071138 containerd[1430]: time="2025-08-13T00:10:57.071083830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:57.073076 containerd[1430]: time="2025-08-13T00:10:57.073037378Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 13 00:10:57.074162 containerd[1430]: time="2025-08-13T00:10:57.074113644Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:57.078082 containerd[1430]: time="2025-08-13T00:10:57.078015719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:57.078973 containerd[1430]: time="2025-08-13T00:10:57.078833142Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 459.84216ms" Aug 13 00:10:57.078973 containerd[1430]: time="2025-08-13T00:10:57.078869942Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:10:57.079495 containerd[1430]: time="2025-08-13T00:10:57.079291489Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 00:10:57.598911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981395498.mount: Deactivated successfully. Aug 13 00:10:59.570398 containerd[1430]: time="2025-08-13T00:10:59.570322550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:59.585546 containerd[1430]: time="2025-08-13T00:10:59.585486329Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Aug 13 00:10:59.691337 containerd[1430]: time="2025-08-13T00:10:59.691244814Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:59.700299 containerd[1430]: time="2025-08-13T00:10:59.700229986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:10:59.701656 containerd[1430]: time="2025-08-13T00:10:59.701612218Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.6222882s" Aug 13 00:10:59.701656 containerd[1430]: time="2025-08-13T00:10:59.701656566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Aug 13 00:11:03.169831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
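
The ~70 MB etcd image lands in 2.62 s, roughly 27 MB/s from the registry or a local cache. Meanwhile kubelet.service reaches restart counter 2; systemd tracks that directly, so instead of counting journal entries:

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
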
Aug 13 00:11:03.183621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:11:03.291160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:11:03.297940 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:11:03.339283 kubelet[2011]: E0813 00:11:03.339084 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:11:03.342887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:11:03.343324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:11:05.036610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:11:05.051673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:11:05.088992 systemd[1]: Reloading requested from client PID 2026 ('systemctl') (unit session-7.scope)... Aug 13 00:11:05.089010 systemd[1]: Reloading... Aug 13 00:11:05.171456 zram_generator::config[2066]: No configuration found. Aug 13 00:11:05.290364 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:11:05.347434 systemd[1]: Reloading finished in 258 ms. Aug 13 00:11:05.399746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:11:05.401628 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:11:05.405894 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:11:05.406114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:11:05.408422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:11:05.527599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:11:05.533742 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:11:05.583969 kubelet[2112]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:11:05.583969 kubelet[2112]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:11:05.583969 kubelet[2112]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
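
This attempt is different: the daemon reload and restart were requested from session-7 (the install.sh run), and the kubelet now gets past config loading but warns that several flags belong in the config file. A hedged sketch of that migration; the field names are assumptions based on the deprecation notices, so check them against this kubelet version:

    # assumed KubeletConfiguration equivalents of the deprecated flags
    cat >>/var/lib/kubelet/config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
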
Aug 13 00:11:05.583969 kubelet[2112]: I0813 00:11:05.583930 2112 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:11:06.328560 kubelet[2112]: I0813 00:11:06.328490 2112 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:11:06.328560 kubelet[2112]: I0813 00:11:06.328545 2112 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:11:06.329386 kubelet[2112]: I0813 00:11:06.329256 2112 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:11:06.360317 kubelet[2112]: E0813 00:11:06.360271 2112 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:11:06.361072 kubelet[2112]: I0813 00:11:06.361053 2112 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:11:06.372713 kubelet[2112]: E0813 00:11:06.372652 2112 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:11:06.372713 kubelet[2112]: I0813 00:11:06.372705 2112 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:11:06.375645 kubelet[2112]: I0813 00:11:06.375625 2112 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:11:06.377775 kubelet[2112]: I0813 00:11:06.377718 2112 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:11:06.377947 kubelet[2112]: I0813 00:11:06.377775 2112 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:11:06.378158 kubelet[2112]: I0813 00:11:06.378137 2112 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:11:06.378158 kubelet[2112]: I0813 00:11:06.378150 2112 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:11:06.378603 kubelet[2112]: I0813 00:11:06.378583 2112 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:11:06.384743 kubelet[2112]: I0813 00:11:06.384708 2112 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:11:06.384743 kubelet[2112]: I0813 00:11:06.384743 2112 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:11:06.384848 kubelet[2112]: I0813 00:11:06.384778 2112 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:11:06.387290 kubelet[2112]: I0813 00:11:06.387100 2112 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:11:06.390513 kubelet[2112]: I0813 00:11:06.390482 2112 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:11:06.391001 kubelet[2112]: E0813 00:11:06.390867 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:11:06.392360 kubelet[2112]: I0813 00:11:06.391262 2112 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 
00:11:06.392360 kubelet[2112]: E0813 00:11:06.391266 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:11:06.392360 kubelet[2112]: W0813 00:11:06.391415 2112 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:11:06.394089 kubelet[2112]: I0813 00:11:06.394064 2112 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:11:06.394145 kubelet[2112]: I0813 00:11:06.394139 2112 server.go:1289] "Started kubelet" Aug 13 00:11:06.399652 kubelet[2112]: I0813 00:11:06.399562 2112 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:11:06.399748 kubelet[2112]: I0813 00:11:06.399711 2112 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:11:06.403030 kubelet[2112]: I0813 00:11:06.402736 2112 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:11:06.404126 kubelet[2112]: I0813 00:11:06.404099 2112 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:11:06.404266 kubelet[2112]: I0813 00:11:06.404242 2112 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:11:06.404800 kubelet[2112]: I0813 00:11:06.404778 2112 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:11:06.405602 kubelet[2112]: E0813 00:11:06.405578 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:11:06.405661 kubelet[2112]: I0813 00:11:06.405610 2112 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:11:06.406222 kubelet[2112]: E0813 00:11:06.404488 2112 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2b190da5c469 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:11:06.394084457 +0000 UTC m=+0.855928254,LastTimestamp:2025-08-13 00:11:06.394084457 +0000 UTC m=+0.855928254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:11:06.406222 kubelet[2112]: I0813 00:11:06.405780 2112 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:11:06.406222 kubelet[2112]: I0813 00:11:06.405845 2112 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:11:06.406222 kubelet[2112]: E0813 00:11:06.406000 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms" Aug 13 00:11:06.406222 kubelet[2112]: E0813 
00:11:06.406149 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:11:06.406827 kubelet[2112]: I0813 00:11:06.406797 2112 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:11:06.407218 kubelet[2112]: I0813 00:11:06.406900 2112 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:11:06.408047 kubelet[2112]: E0813 00:11:06.408025 2112 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:11:06.408164 kubelet[2112]: I0813 00:11:06.408085 2112 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:11:06.423924 kubelet[2112]: I0813 00:11:06.423885 2112 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:11:06.423924 kubelet[2112]: I0813 00:11:06.423904 2112 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:11:06.423924 kubelet[2112]: I0813 00:11:06.423924 2112 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:11:06.429740 kubelet[2112]: I0813 00:11:06.429690 2112 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:11:06.431083 kubelet[2112]: I0813 00:11:06.430939 2112 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:11:06.431083 kubelet[2112]: I0813 00:11:06.430967 2112 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:11:06.431083 kubelet[2112]: I0813 00:11:06.430988 2112 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:11:06.431083 kubelet[2112]: I0813 00:11:06.430994 2112 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:11:06.431083 kubelet[2112]: E0813 00:11:06.431041 2112 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:11:06.492124 kubelet[2112]: E0813 00:11:06.492086 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:11:06.492397 kubelet[2112]: I0813 00:11:06.492366 2112 policy_none.go:49] "None policy: Start" Aug 13 00:11:06.492453 kubelet[2112]: I0813 00:11:06.492401 2112 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:11:06.492453 kubelet[2112]: I0813 00:11:06.492414 2112 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:11:06.497418 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:11:06.506363 kubelet[2112]: E0813 00:11:06.506323 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:11:06.512413 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
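
Every "connection refused" against https://10.0.0.85:6443 here is the normal chicken-and-egg of bootstrapping: the API server the kubelet keeps dialling will itself be started by this kubelet, as a static pod read from the path logged above (/etc/kubernetes/manifests), with no API server involved. For illustration, what that directory accepts is a plain Pod manifest; a hypothetical example:

    # hypothetical static pod; the kubelet launches it directly from disk
    cat >/etc/kubernetes/manifests/hello.yaml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hello
      namespace: kube-system
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.10
    EOF
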
Aug 13 00:11:06.528005 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:11:06.529481 kubelet[2112]: E0813 00:11:06.529262 2112 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:11:06.529589 kubelet[2112]: I0813 00:11:06.529501 2112 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:11:06.529589 kubelet[2112]: I0813 00:11:06.529521 2112 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:11:06.530148 kubelet[2112]: I0813 00:11:06.529954 2112 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:11:06.530970 kubelet[2112]: E0813 00:11:06.530949 2112 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:11:06.531674 kubelet[2112]: E0813 00:11:06.530999 2112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:11:06.542904 systemd[1]: Created slice kubepods-burstable-pod75813eb2d2be36e4d4fe43db3ec64b8b.slice - libcontainer container kubepods-burstable-pod75813eb2d2be36e4d4fe43db3ec64b8b.slice. Aug 13 00:11:06.554128 kubelet[2112]: E0813 00:11:06.554061 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:11:06.557157 systemd[1]: Created slice kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice - libcontainer container kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice. Aug 13 00:11:06.559175 kubelet[2112]: E0813 00:11:06.559140 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:11:06.561180 systemd[1]: Created slice kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice - libcontainer container kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice. 
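
The kubepods-burstable-pod<UID>.slice units map one-to-one onto the three control-plane pods whose manifests are about to be processed; with CgroupVersion:2 from the config dump above, the same hierarchy is visible straight from the unified cgroup filesystem:

    systemctl status kubepods-burstable.slice --no-pager
    ls /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/
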
Aug 13 00:11:06.562810 kubelet[2112]: E0813 00:11:06.562771 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:11:06.607102 kubelet[2112]: I0813 00:11:06.606982 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75813eb2d2be36e4d4fe43db3ec64b8b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"75813eb2d2be36e4d4fe43db3ec64b8b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:06.607102 kubelet[2112]: I0813 00:11:06.607022 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75813eb2d2be36e4d4fe43db3ec64b8b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"75813eb2d2be36e4d4fe43db3ec64b8b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:06.607102 kubelet[2112]: I0813 00:11:06.607041 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75813eb2d2be36e4d4fe43db3ec64b8b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"75813eb2d2be36e4d4fe43db3ec64b8b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:06.607102 kubelet[2112]: I0813 00:11:06.607057 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:06.607102 kubelet[2112]: I0813 00:11:06.607071 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:06.607520 kubelet[2112]: I0813 00:11:06.607093 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:11:06.607520 kubelet[2112]: I0813 00:11:06.607116 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:06.607520 kubelet[2112]: I0813 00:11:06.607130 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:06.607520 kubelet[2112]: I0813 00:11:06.607149 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:06.607770 kubelet[2112]: E0813 00:11:06.607719 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms" Aug 13 00:11:06.631225 kubelet[2112]: I0813 00:11:06.631203 2112 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:11:06.631893 kubelet[2112]: E0813 00:11:06.631841 2112 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Aug 13 00:11:06.833844 kubelet[2112]: I0813 00:11:06.833811 2112 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:11:06.834208 kubelet[2112]: E0813 00:11:06.834181 2112 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Aug 13 00:11:06.855561 kubelet[2112]: E0813 00:11:06.855491 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:06.856098 containerd[1430]: time="2025-08-13T00:11:06.856060920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:75813eb2d2be36e4d4fe43db3ec64b8b,Namespace:kube-system,Attempt:0,}" Aug 13 00:11:06.860404 kubelet[2112]: E0813 00:11:06.860291 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:06.860818 containerd[1430]: time="2025-08-13T00:11:06.860703550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,}" Aug 13 00:11:06.864051 kubelet[2112]: E0813 00:11:06.864012 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:06.864477 containerd[1430]: time="2025-08-13T00:11:06.864438552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,}" Aug 13 00:11:07.009044 kubelet[2112]: E0813 00:11:07.009008 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms" Aug 13 00:11:07.236242 kubelet[2112]: I0813 00:11:07.236207 2112 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:11:07.236590 kubelet[2112]: E0813 00:11:07.236565 2112 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Aug 13 00:11:07.308455 
kubelet[2112]: E0813 00:11:07.308404 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:11:07.431956 kubelet[2112]: E0813 00:11:07.431906 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:11:07.447897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638190448.mount: Deactivated successfully. Aug 13 00:11:07.453436 containerd[1430]: time="2025-08-13T00:11:07.452561847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:11:07.454400 containerd[1430]: time="2025-08-13T00:11:07.454370236Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 13 00:11:07.456532 containerd[1430]: time="2025-08-13T00:11:07.456498591Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:11:07.457188 containerd[1430]: time="2025-08-13T00:11:07.457078494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:11:07.458830 containerd[1430]: time="2025-08-13T00:11:07.458796300Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:11:07.460823 containerd[1430]: time="2025-08-13T00:11:07.460533807Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:11:07.462134 containerd[1430]: time="2025-08-13T00:11:07.461652047Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:11:07.463502 containerd[1430]: time="2025-08-13T00:11:07.463451185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:11:07.465650 containerd[1430]: time="2025-08-13T00:11:07.465612978Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 601.111986ms" Aug 13 00:11:07.466266 containerd[1430]: time="2025-08-13T00:11:07.466237173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 610.092905ms" Aug 13 00:11:07.467553 kubelet[2112]: E0813 00:11:07.467489 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:11:07.469001 containerd[1430]: time="2025-08-13T00:11:07.468949235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 608.174234ms" Aug 13 00:11:07.665966 containerd[1430]: time="2025-08-13T00:11:07.660792519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:07.665966 containerd[1430]: time="2025-08-13T00:11:07.661556673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:07.665966 containerd[1430]: time="2025-08-13T00:11:07.661570729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:07.665966 containerd[1430]: time="2025-08-13T00:11:07.661658750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:07.667574 containerd[1430]: time="2025-08-13T00:11:07.667005948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:07.667574 containerd[1430]: time="2025-08-13T00:11:07.667052801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:07.667574 containerd[1430]: time="2025-08-13T00:11:07.667082996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:07.667574 containerd[1430]: time="2025-08-13T00:11:07.667245782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:07.667818 containerd[1430]: time="2025-08-13T00:11:07.666801714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:07.667818 containerd[1430]: time="2025-08-13T00:11:07.666894460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:07.667818 containerd[1430]: time="2025-08-13T00:11:07.666916886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:07.667818 containerd[1430]: time="2025-08-13T00:11:07.667049477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:07.688549 systemd[1]: Started cri-containerd-cc41cf206d055e157aa9230328700b66a52466d972a8f0854c31634c17bac331.scope - libcontainer container cc41cf206d055e157aa9230328700b66a52466d972a8f0854c31634c17bac331. Aug 13 00:11:07.693004 systemd[1]: Started cri-containerd-9717c3c56eabae75fe8476eeb768d73e351fd21ffe50bf436d4487f2a1c93ca8.scope - libcontainer container 9717c3c56eabae75fe8476eeb768d73e351fd21ffe50bf436d4487f2a1c93ca8. Aug 13 00:11:07.694644 systemd[1]: Started cri-containerd-a01345ff42cb743da7c0f6f9b56ac93460eaef7138e23040e27b2a75dc3db250.scope - libcontainer container a01345ff42cb743da7c0f6f9b56ac93460eaef7138e23040e27b2a75dc3db250. Aug 13 00:11:07.735869 containerd[1430]: time="2025-08-13T00:11:07.732737670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc41cf206d055e157aa9230328700b66a52466d972a8f0854c31634c17bac331\"" Aug 13 00:11:07.736004 kubelet[2112]: E0813 00:11:07.735236 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:07.736851 containerd[1430]: time="2025-08-13T00:11:07.736454803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"a01345ff42cb743da7c0f6f9b56ac93460eaef7138e23040e27b2a75dc3db250\"" Aug 13 00:11:07.737964 kubelet[2112]: E0813 00:11:07.737925 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:07.740019 containerd[1430]: time="2025-08-13T00:11:07.739965459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:75813eb2d2be36e4d4fe43db3ec64b8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9717c3c56eabae75fe8476eeb768d73e351fd21ffe50bf436d4487f2a1c93ca8\"" Aug 13 00:11:07.740666 kubelet[2112]: E0813 00:11:07.740631 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:07.742826 containerd[1430]: time="2025-08-13T00:11:07.742789530Z" level=info msg="CreateContainer within sandbox \"cc41cf206d055e157aa9230328700b66a52466d972a8f0854c31634c17bac331\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:11:07.744033 containerd[1430]: time="2025-08-13T00:11:07.743961150Z" level=info msg="CreateContainer within sandbox \"a01345ff42cb743da7c0f6f9b56ac93460eaef7138e23040e27b2a75dc3db250\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:11:07.745637 containerd[1430]: time="2025-08-13T00:11:07.745602188Z" level=info msg="CreateContainer within sandbox \"9717c3c56eabae75fe8476eeb768d73e351fd21ffe50bf436d4487f2a1c93ca8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:11:07.760811 containerd[1430]: time="2025-08-13T00:11:07.760766978Z" level=info msg="CreateContainer within sandbox \"cc41cf206d055e157aa9230328700b66a52466d972a8f0854c31634c17bac331\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc61f5fe0d4846189e5f8e99336065bfbc6212094d86cdd160d836fae3afe91c\"" Aug 
13 00:11:07.761661 containerd[1430]: time="2025-08-13T00:11:07.761634650Z" level=info msg="StartContainer for \"fc61f5fe0d4846189e5f8e99336065bfbc6212094d86cdd160d836fae3afe91c\"" Aug 13 00:11:07.761950 containerd[1430]: time="2025-08-13T00:11:07.761863472Z" level=info msg="CreateContainer within sandbox \"a01345ff42cb743da7c0f6f9b56ac93460eaef7138e23040e27b2a75dc3db250\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"39c8525f662b58453b60f0aa3715623e9d8da60d476a04b80afdd75f5c6eaa62\"" Aug 13 00:11:07.762377 containerd[1430]: time="2025-08-13T00:11:07.762340938Z" level=info msg="StartContainer for \"39c8525f662b58453b60f0aa3715623e9d8da60d476a04b80afdd75f5c6eaa62\"" Aug 13 00:11:07.762666 containerd[1430]: time="2025-08-13T00:11:07.762636797Z" level=info msg="CreateContainer within sandbox \"9717c3c56eabae75fe8476eeb768d73e351fd21ffe50bf436d4487f2a1c93ca8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"060ebb1eb1ffaf8189f768a8f5ef455d7cb6d4e7827f3f9d1c34cc072d9c7216\"" Aug 13 00:11:07.763316 containerd[1430]: time="2025-08-13T00:11:07.763108577Z" level=info msg="StartContainer for \"060ebb1eb1ffaf8189f768a8f5ef455d7cb6d4e7827f3f9d1c34cc072d9c7216\"" Aug 13 00:11:07.790540 systemd[1]: Started cri-containerd-39c8525f662b58453b60f0aa3715623e9d8da60d476a04b80afdd75f5c6eaa62.scope - libcontainer container 39c8525f662b58453b60f0aa3715623e9d8da60d476a04b80afdd75f5c6eaa62. Aug 13 00:11:07.794650 systemd[1]: Started cri-containerd-060ebb1eb1ffaf8189f768a8f5ef455d7cb6d4e7827f3f9d1c34cc072d9c7216.scope - libcontainer container 060ebb1eb1ffaf8189f768a8f5ef455d7cb6d4e7827f3f9d1c34cc072d9c7216. Aug 13 00:11:07.795575 systemd[1]: Started cri-containerd-fc61f5fe0d4846189e5f8e99336065bfbc6212094d86cdd160d836fae3afe91c.scope - libcontainer container fc61f5fe0d4846189e5f8e99336065bfbc6212094d86cdd160d836fae3afe91c. 
Aug 13 00:11:07.810443 kubelet[2112]: E0813 00:11:07.810401 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="1.6s" Aug 13 00:11:07.864276 containerd[1430]: time="2025-08-13T00:11:07.864125227Z" level=info msg="StartContainer for \"060ebb1eb1ffaf8189f768a8f5ef455d7cb6d4e7827f3f9d1c34cc072d9c7216\" returns successfully" Aug 13 00:11:07.871490 containerd[1430]: time="2025-08-13T00:11:07.871171048Z" level=info msg="StartContainer for \"39c8525f662b58453b60f0aa3715623e9d8da60d476a04b80afdd75f5c6eaa62\" returns successfully" Aug 13 00:11:07.871490 containerd[1430]: time="2025-08-13T00:11:07.871318257Z" level=info msg="StartContainer for \"fc61f5fe0d4846189e5f8e99336065bfbc6212094d86cdd160d836fae3afe91c\" returns successfully" Aug 13 00:11:07.964007 kubelet[2112]: E0813 00:11:07.963952 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:11:08.038015 kubelet[2112]: I0813 00:11:08.037976 2112 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:11:08.038327 kubelet[2112]: E0813 00:11:08.038288 2112 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Aug 13 00:11:08.436507 kubelet[2112]: E0813 00:11:08.436474 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:11:08.436611 kubelet[2112]: E0813 00:11:08.436602 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:08.438867 kubelet[2112]: E0813 00:11:08.438841 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:11:08.438958 kubelet[2112]: E0813 00:11:08.438942 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:08.440407 kubelet[2112]: E0813 00:11:08.440381 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:11:08.440518 kubelet[2112]: E0813 00:11:08.440501 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:09.441892 kubelet[2112]: E0813 00:11:09.441857 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:11:09.442243 kubelet[2112]: E0813 00:11:09.441975 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:09.442279 
kubelet[2112]: E0813 00:11:09.442239 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:11:09.442357 kubelet[2112]: E0813 00:11:09.442329 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:09.640177 kubelet[2112]: I0813 00:11:09.640126 2112 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:11:09.749464 kubelet[2112]: E0813 00:11:09.748533 2112 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:11:09.838236 kubelet[2112]: I0813 00:11:09.838187 2112 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:11:09.838236 kubelet[2112]: E0813 00:11:09.838234 2112 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:11:09.847058 kubelet[2112]: E0813 00:11:09.847026 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:11:09.947864 kubelet[2112]: E0813 00:11:09.947816 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:11:10.048059 kubelet[2112]: E0813 00:11:10.047932 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:11:10.106531 kubelet[2112]: I0813 00:11:10.106470 2112 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:10.112196 kubelet[2112]: E0813 00:11:10.112145 2112 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:10.112196 kubelet[2112]: I0813 00:11:10.112186 2112 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:10.114603 kubelet[2112]: E0813 00:11:10.114332 2112 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:10.114603 kubelet[2112]: I0813 00:11:10.114448 2112 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:11:10.116437 kubelet[2112]: E0813 00:11:10.116408 2112 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 00:11:10.390468 kubelet[2112]: I0813 00:11:10.390339 2112 apiserver.go:52] "Watching apiserver" Aug 13 00:11:10.406407 kubelet[2112]: I0813 00:11:10.406365 2112 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:11:10.861374 kubelet[2112]: I0813 00:11:10.861327 2112 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:10.869006 kubelet[2112]: E0813 00:11:10.868976 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:10.967640 kubelet[2112]: I0813 00:11:10.967582 2112 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:11:10.971051 kubelet[2112]: E0813 00:11:10.971027 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:11.444000 kubelet[2112]: E0813 00:11:11.443864 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:11.444000 kubelet[2112]: E0813 00:11:11.443964 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:11.859673 kubelet[2112]: I0813 00:11:11.859561 2112 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:11.864196 kubelet[2112]: E0813 00:11:11.864160 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:12.072838 systemd[1]: Reloading requested from client PID 2403 ('systemctl') (unit session-7.scope)... Aug 13 00:11:12.072853 systemd[1]: Reloading... Aug 13 00:11:12.135472 zram_generator::config[2445]: No configuration found. Aug 13 00:11:12.217673 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:11:12.284943 systemd[1]: Reloading finished in 211 ms. Aug 13 00:11:12.318856 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:11:12.334453 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:11:12.334705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:11:12.334768 systemd[1]: kubelet.service: Consumed 1.286s CPU time, 127.1M memory peak, 0B memory swap peak. Aug 13 00:11:12.348718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:11:12.452721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:11:12.457358 (kubelet)[2484]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:11:12.490631 kubelet[2484]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:11:12.492361 kubelet[2484]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:11:12.492361 kubelet[2484]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:11:12.492361 kubelet[2484]: I0813 00:11:12.491002 2484 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:11:12.498270 kubelet[2484]: I0813 00:11:12.498235 2484 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:11:12.498270 kubelet[2484]: I0813 00:11:12.498267 2484 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:11:12.498513 kubelet[2484]: I0813 00:11:12.498498 2484 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:11:12.499754 kubelet[2484]: I0813 00:11:12.499736 2484 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 00:11:12.502007 kubelet[2484]: I0813 00:11:12.501860 2484 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:11:12.507041 kubelet[2484]: E0813 00:11:12.507011 2484 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:11:12.507041 kubelet[2484]: I0813 00:11:12.507038 2484 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:11:12.513537 kubelet[2484]: I0813 00:11:12.513478 2484 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:11:12.513866 kubelet[2484]: I0813 00:11:12.513818 2484 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:11:12.514029 kubelet[2484]: I0813 00:11:12.513856 2484 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:11:12.514118 kubelet[2484]: I0813 00:11:12.514031 2484 topology_manager.go:138] "Creating topology 
manager with none policy" Aug 13 00:11:12.514118 kubelet[2484]: I0813 00:11:12.514041 2484 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:11:12.514118 kubelet[2484]: I0813 00:11:12.514088 2484 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:11:12.514269 kubelet[2484]: I0813 00:11:12.514255 2484 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:11:12.514298 kubelet[2484]: I0813 00:11:12.514271 2484 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:11:12.514319 kubelet[2484]: I0813 00:11:12.514299 2484 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:11:12.515145 kubelet[2484]: I0813 00:11:12.515053 2484 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:11:12.516767 kubelet[2484]: I0813 00:11:12.516739 2484 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:11:12.519401 kubelet[2484]: I0813 00:11:12.517556 2484 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:11:12.519869 kubelet[2484]: I0813 00:11:12.519844 2484 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:11:12.519936 kubelet[2484]: I0813 00:11:12.519882 2484 server.go:1289] "Started kubelet" Aug 13 00:11:12.520164 kubelet[2484]: I0813 00:11:12.520129 2484 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:11:12.521300 kubelet[2484]: I0813 00:11:12.521276 2484 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:11:12.523522 kubelet[2484]: I0813 00:11:12.523494 2484 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:11:12.523712 kubelet[2484]: I0813 00:11:12.520162 2484 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:11:12.523948 kubelet[2484]: I0813 00:11:12.523928 2484 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:11:12.524540 kubelet[2484]: I0813 00:11:12.524509 2484 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:11:12.524704 kubelet[2484]: I0813 00:11:12.524676 2484 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:11:12.525034 kubelet[2484]: E0813 00:11:12.524745 2484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:11:12.525272 kubelet[2484]: I0813 00:11:12.525257 2484 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:11:12.525627 kubelet[2484]: I0813 00:11:12.525607 2484 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:11:12.532803 kubelet[2484]: I0813 00:11:12.532771 2484 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:11:12.532936 kubelet[2484]: I0813 00:11:12.532910 2484 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:11:12.535147 kubelet[2484]: I0813 00:11:12.535122 2484 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:11:12.535538 kubelet[2484]: E0813 00:11:12.535514 2484 kubelet.go:1600] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:11:12.551430 kubelet[2484]: I0813 00:11:12.551393 2484 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:11:12.553278 kubelet[2484]: I0813 00:11:12.553239 2484 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:11:12.553278 kubelet[2484]: I0813 00:11:12.553277 2484 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:11:12.553441 kubelet[2484]: I0813 00:11:12.553298 2484 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:11:12.553441 kubelet[2484]: I0813 00:11:12.553318 2484 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:11:12.553441 kubelet[2484]: E0813 00:11:12.553384 2484 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:11:12.569434 kubelet[2484]: I0813 00:11:12.569405 2484 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:11:12.569434 kubelet[2484]: I0813 00:11:12.569423 2484 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:11:12.569434 kubelet[2484]: I0813 00:11:12.569442 2484 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:11:12.569637 kubelet[2484]: I0813 00:11:12.569602 2484 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:11:12.569637 kubelet[2484]: I0813 00:11:12.569625 2484 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:11:12.569699 kubelet[2484]: I0813 00:11:12.569644 2484 policy_none.go:49] "None policy: Start" Aug 13 00:11:12.569699 kubelet[2484]: I0813 00:11:12.569653 2484 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:11:12.569699 kubelet[2484]: I0813 00:11:12.569663 2484 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:11:12.569768 kubelet[2484]: I0813 00:11:12.569753 2484 state_mem.go:75] "Updated machine memory state" Aug 13 00:11:12.573124 kubelet[2484]: E0813 00:11:12.573101 2484 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:11:12.573400 kubelet[2484]: I0813 00:11:12.573252 2484 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:11:12.573400 kubelet[2484]: I0813 00:11:12.573270 2484 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:11:12.573709 kubelet[2484]: I0813 00:11:12.573484 2484 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:11:12.574983 kubelet[2484]: E0813 00:11:12.574205 2484 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:11:12.655057 kubelet[2484]: I0813 00:11:12.655015 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:12.655197 kubelet[2484]: I0813 00:11:12.655097 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:11:12.655197 kubelet[2484]: I0813 00:11:12.655031 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:12.660398 kubelet[2484]: E0813 00:11:12.660369 2484 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:12.661128 kubelet[2484]: E0813 00:11:12.661093 2484 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:12.661195 kubelet[2484]: E0813 00:11:12.661171 2484 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:11:12.678181 kubelet[2484]: I0813 00:11:12.678114 2484 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:11:12.684231 kubelet[2484]: I0813 00:11:12.683652 2484 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 13 00:11:12.684231 kubelet[2484]: I0813 00:11:12.683731 2484 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:11:12.728573 kubelet[2484]: I0813 00:11:12.728536 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75813eb2d2be36e4d4fe43db3ec64b8b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"75813eb2d2be36e4d4fe43db3ec64b8b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:12.728714 kubelet[2484]: I0813 00:11:12.728574 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:11:12.728714 kubelet[2484]: I0813 00:11:12.728624 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75813eb2d2be36e4d4fe43db3ec64b8b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"75813eb2d2be36e4d4fe43db3ec64b8b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:12.728714 kubelet[2484]: I0813 00:11:12.728640 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:12.728714 kubelet[2484]: I0813 00:11:12.728655 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:12.728714 kubelet[2484]: I0813 00:11:12.728705 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:12.728822 kubelet[2484]: I0813 00:11:12.728721 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:12.728822 kubelet[2484]: I0813 00:11:12.728738 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:11:12.728822 kubelet[2484]: I0813 00:11:12.728771 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75813eb2d2be36e4d4fe43db3ec64b8b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"75813eb2d2be36e4d4fe43db3ec64b8b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:12.960733 kubelet[2484]: E0813 00:11:12.960692 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:12.961981 kubelet[2484]: E0813 00:11:12.961908 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:12.961981 kubelet[2484]: E0813 00:11:12.961948 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:13.515842 kubelet[2484]: I0813 00:11:13.515807 2484 apiserver.go:52] "Watching apiserver" Aug 13 00:11:13.526729 kubelet[2484]: I0813 00:11:13.526634 2484 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:11:13.563610 kubelet[2484]: I0813 00:11:13.563574 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:13.563950 kubelet[2484]: I0813 00:11:13.563931 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:11:13.563999 kubelet[2484]: E0813 00:11:13.563968 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:13.571721 kubelet[2484]: E0813 00:11:13.571688 2484 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:11:13.571829 kubelet[2484]: E0813 00:11:13.571688 2484 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:11:13.571866 kubelet[2484]: E0813 00:11:13.571848 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:13.571942 kubelet[2484]: E0813 00:11:13.571924 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:13.585723 kubelet[2484]: I0813 00:11:13.585663 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.585648708 podStartE2EDuration="2.585648708s" podCreationTimestamp="2025-08-13 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:11:13.584946589 +0000 UTC m=+1.123711639" watchObservedRunningTime="2025-08-13 00:11:13.585648708 +0000 UTC m=+1.124413758" Aug 13 00:11:13.594335 kubelet[2484]: I0813 00:11:13.594261 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.594245057 podStartE2EDuration="3.594245057s" podCreationTimestamp="2025-08-13 00:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:11:13.593484338 +0000 UTC m=+1.132249388" watchObservedRunningTime="2025-08-13 00:11:13.594245057 +0000 UTC m=+1.133010107" Aug 13 00:11:13.603340 kubelet[2484]: I0813 00:11:13.603274 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.603258011 podStartE2EDuration="3.603258011s" podCreationTimestamp="2025-08-13 00:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:11:13.602694146 +0000 UTC m=+1.141459196" watchObservedRunningTime="2025-08-13 00:11:13.603258011 +0000 UTC m=+1.142023061" Aug 13 00:11:14.566039 kubelet[2484]: E0813 00:11:14.564994 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:14.571454 kubelet[2484]: E0813 00:11:14.571188 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:17.511570 kubelet[2484]: E0813 00:11:17.511516 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:17.512269 kubelet[2484]: E0813 00:11:17.512242 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:17.569728 kubelet[2484]: E0813 00:11:17.569685 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:17.570350 kubelet[2484]: E0813 00:11:17.570291 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:18.210960 kubelet[2484]: I0813 00:11:18.210922 2484 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:11:18.211285 containerd[1430]: time="2025-08-13T00:11:18.211245666Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:11:18.212370 kubelet[2484]: I0813 00:11:18.211473 2484 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:11:19.206751 systemd[1]: Created slice kubepods-besteffort-pod45dc6954_bac0_4f5a_8569_6f9eb08ee043.slice - libcontainer container kubepods-besteffort-pod45dc6954_bac0_4f5a_8569_6f9eb08ee043.slice. Aug 13 00:11:19.273939 kubelet[2484]: I0813 00:11:19.273892 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45dc6954-bac0-4f5a-8569-6f9eb08ee043-xtables-lock\") pod \"kube-proxy-cqqn6\" (UID: \"45dc6954-bac0-4f5a-8569-6f9eb08ee043\") " pod="kube-system/kube-proxy-cqqn6" Aug 13 00:11:19.273939 kubelet[2484]: I0813 00:11:19.273933 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45dc6954-bac0-4f5a-8569-6f9eb08ee043-lib-modules\") pod \"kube-proxy-cqqn6\" (UID: \"45dc6954-bac0-4f5a-8569-6f9eb08ee043\") " pod="kube-system/kube-proxy-cqqn6" Aug 13 00:11:19.274319 kubelet[2484]: I0813 00:11:19.273952 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45dc6954-bac0-4f5a-8569-6f9eb08ee043-kube-proxy\") pod \"kube-proxy-cqqn6\" (UID: \"45dc6954-bac0-4f5a-8569-6f9eb08ee043\") " pod="kube-system/kube-proxy-cqqn6" Aug 13 00:11:19.274319 kubelet[2484]: I0813 00:11:19.273976 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrh5k\" (UniqueName: \"kubernetes.io/projected/45dc6954-bac0-4f5a-8569-6f9eb08ee043-kube-api-access-wrh5k\") pod \"kube-proxy-cqqn6\" (UID: \"45dc6954-bac0-4f5a-8569-6f9eb08ee043\") " pod="kube-system/kube-proxy-cqqn6" Aug 13 00:11:19.431976 systemd[1]: Created slice kubepods-besteffort-poddc1e52c0_b392_40e0_b7b0_6d6aea3d7640.slice - libcontainer container kubepods-besteffort-poddc1e52c0_b392_40e0_b7b0_6d6aea3d7640.slice. 
Aug 13 00:11:19.475519 kubelet[2484]: I0813 00:11:19.475384 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2gvb\" (UniqueName: \"kubernetes.io/projected/dc1e52c0-b392-40e0-b7b0-6d6aea3d7640-kube-api-access-w2gvb\") pod \"tigera-operator-747864d56d-t7dqq\" (UID: \"dc1e52c0-b392-40e0-b7b0-6d6aea3d7640\") " pod="tigera-operator/tigera-operator-747864d56d-t7dqq" Aug 13 00:11:19.475519 kubelet[2484]: I0813 00:11:19.475469 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc1e52c0-b392-40e0-b7b0-6d6aea3d7640-var-lib-calico\") pod \"tigera-operator-747864d56d-t7dqq\" (UID: \"dc1e52c0-b392-40e0-b7b0-6d6aea3d7640\") " pod="tigera-operator/tigera-operator-747864d56d-t7dqq" Aug 13 00:11:19.518002 kubelet[2484]: E0813 00:11:19.517802 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:19.518525 containerd[1430]: time="2025-08-13T00:11:19.518481631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqqn6,Uid:45dc6954-bac0-4f5a-8569-6f9eb08ee043,Namespace:kube-system,Attempt:0,}" Aug 13 00:11:19.539300 containerd[1430]: time="2025-08-13T00:11:19.538814994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:19.539300 containerd[1430]: time="2025-08-13T00:11:19.538880506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:19.539300 containerd[1430]: time="2025-08-13T00:11:19.538896674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:19.539300 containerd[1430]: time="2025-08-13T00:11:19.538971110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:19.552410 systemd[1]: run-containerd-runc-k8s.io-79fdeca4c668f14ad9ca87879ba49c6cb9378dc1577e3994235d3f88b639350c-runc.sulgyY.mount: Deactivated successfully. Aug 13 00:11:19.565568 systemd[1]: Started cri-containerd-79fdeca4c668f14ad9ca87879ba49c6cb9378dc1577e3994235d3f88b639350c.scope - libcontainer container 79fdeca4c668f14ad9ca87879ba49c6cb9378dc1577e3994235d3f88b639350c. 
Aug 13 00:11:19.593493 containerd[1430]: time="2025-08-13T00:11:19.593450617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqqn6,Uid:45dc6954-bac0-4f5a-8569-6f9eb08ee043,Namespace:kube-system,Attempt:0,} returns sandbox id \"79fdeca4c668f14ad9ca87879ba49c6cb9378dc1577e3994235d3f88b639350c\"" Aug 13 00:11:19.594217 kubelet[2484]: E0813 00:11:19.594195 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:19.600315 containerd[1430]: time="2025-08-13T00:11:19.599957833Z" level=info msg="CreateContainer within sandbox \"79fdeca4c668f14ad9ca87879ba49c6cb9378dc1577e3994235d3f88b639350c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:11:19.613356 containerd[1430]: time="2025-08-13T00:11:19.613304706Z" level=info msg="CreateContainer within sandbox \"79fdeca4c668f14ad9ca87879ba49c6cb9378dc1577e3994235d3f88b639350c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"52073b119768292fca7092c8dea2f17bb6588fa2cbefaa82c6c2655d3ca078cb\"" Aug 13 00:11:19.613901 containerd[1430]: time="2025-08-13T00:11:19.613867501Z" level=info msg="StartContainer for \"52073b119768292fca7092c8dea2f17bb6588fa2cbefaa82c6c2655d3ca078cb\"" Aug 13 00:11:19.640584 systemd[1]: Started cri-containerd-52073b119768292fca7092c8dea2f17bb6588fa2cbefaa82c6c2655d3ca078cb.scope - libcontainer container 52073b119768292fca7092c8dea2f17bb6588fa2cbefaa82c6c2655d3ca078cb. Aug 13 00:11:19.671391 containerd[1430]: time="2025-08-13T00:11:19.671273996Z" level=info msg="StartContainer for \"52073b119768292fca7092c8dea2f17bb6588fa2cbefaa82c6c2655d3ca078cb\" returns successfully" Aug 13 00:11:19.735676 containerd[1430]: time="2025-08-13T00:11:19.735557367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-t7dqq,Uid:dc1e52c0-b392-40e0-b7b0-6d6aea3d7640,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:11:19.760126 containerd[1430]: time="2025-08-13T00:11:19.759130631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:19.760126 containerd[1430]: time="2025-08-13T00:11:19.759190460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:19.760126 containerd[1430]: time="2025-08-13T00:11:19.759206348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:19.760126 containerd[1430]: time="2025-08-13T00:11:19.759281785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:19.779545 systemd[1]: Started cri-containerd-ab0fa8ec8ba9c7690ab13c2f3a7b7c10a17e950efb2b91fc32548713fcf60ef5.scope - libcontainer container ab0fa8ec8ba9c7690ab13c2f3a7b7c10a17e950efb2b91fc32548713fcf60ef5. 
Aug 13 00:11:19.811287 containerd[1430]: time="2025-08-13T00:11:19.811209486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-t7dqq,Uid:dc1e52c0-b392-40e0-b7b0-6d6aea3d7640,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ab0fa8ec8ba9c7690ab13c2f3a7b7c10a17e950efb2b91fc32548713fcf60ef5\"" Aug 13 00:11:19.813166 containerd[1430]: time="2025-08-13T00:11:19.812864774Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:11:20.581479 kubelet[2484]: E0813 00:11:20.581445 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:21.271832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366402453.mount: Deactivated successfully. Aug 13 00:11:21.410106 kubelet[2484]: E0813 00:11:21.410069 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:21.431540 kubelet[2484]: I0813 00:11:21.431484 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cqqn6" podStartSLOduration=2.431464842 podStartE2EDuration="2.431464842s" podCreationTimestamp="2025-08-13 00:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:11:20.592607169 +0000 UTC m=+8.131372219" watchObservedRunningTime="2025-08-13 00:11:21.431464842 +0000 UTC m=+8.970229892" Aug 13 00:11:21.581861 kubelet[2484]: E0813 00:11:21.581318 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:21.629069 containerd[1430]: time="2025-08-13T00:11:21.629020677Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:21.630074 containerd[1430]: time="2025-08-13T00:11:21.629601691Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 13 00:11:21.630333 containerd[1430]: time="2025-08-13T00:11:21.630306280Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:21.644299 containerd[1430]: time="2025-08-13T00:11:21.633115551Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:21.644425 containerd[1430]: time="2025-08-13T00:11:21.633823661Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.820920389s" Aug 13 00:11:21.644425 containerd[1430]: time="2025-08-13T00:11:21.644406658Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 13 00:11:21.649122 containerd[1430]: time="2025-08-13T00:11:21.649060657Z" 
level=info msg="CreateContainer within sandbox \"ab0fa8ec8ba9c7690ab13c2f3a7b7c10a17e950efb2b91fc32548713fcf60ef5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:11:21.660037 containerd[1430]: time="2025-08-13T00:11:21.659985283Z" level=info msg="CreateContainer within sandbox \"ab0fa8ec8ba9c7690ab13c2f3a7b7c10a17e950efb2b91fc32548713fcf60ef5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ca5bc0c34d24c9f1868944e36064895b9d6a3c8e2251b1c54f28ac3291b3e096\"" Aug 13 00:11:21.660808 containerd[1430]: time="2025-08-13T00:11:21.660502910Z" level=info msg="StartContainer for \"ca5bc0c34d24c9f1868944e36064895b9d6a3c8e2251b1c54f28ac3291b3e096\"" Aug 13 00:11:21.691596 systemd[1]: Started cri-containerd-ca5bc0c34d24c9f1868944e36064895b9d6a3c8e2251b1c54f28ac3291b3e096.scope - libcontainer container ca5bc0c34d24c9f1868944e36064895b9d6a3c8e2251b1c54f28ac3291b3e096. Aug 13 00:11:21.753626 containerd[1430]: time="2025-08-13T00:11:21.753581530Z" level=info msg="StartContainer for \"ca5bc0c34d24c9f1868944e36064895b9d6a3c8e2251b1c54f28ac3291b3e096\" returns successfully" Aug 13 00:11:22.585690 kubelet[2484]: E0813 00:11:22.585654 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:22.606238 kubelet[2484]: I0813 00:11:22.606035 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-t7dqq" podStartSLOduration=1.773219763 podStartE2EDuration="3.606017781s" podCreationTimestamp="2025-08-13 00:11:19 +0000 UTC" firstStartedPulling="2025-08-13 00:11:19.812390422 +0000 UTC m=+7.351155432" lastFinishedPulling="2025-08-13 00:11:21.64518844 +0000 UTC m=+9.183953450" observedRunningTime="2025-08-13 00:11:22.605971722 +0000 UTC m=+10.144736772" watchObservedRunningTime="2025-08-13 00:11:22.606017781 +0000 UTC m=+10.144782831" Aug 13 00:11:26.043441 update_engine[1422]: I20250813 00:11:26.043375 1422 update_attempter.cc:509] Updating boot flags... Aug 13 00:11:26.065377 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2878) Aug 13 00:11:26.111562 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2877) Aug 13 00:11:27.127118 sudo[1615]: pam_unix(sudo:session): session closed for user root Aug 13 00:11:27.144172 sshd[1612]: pam_unix(sshd:session): session closed for user core Aug 13 00:11:27.151125 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:60178.service: Deactivated successfully. Aug 13 00:11:27.153762 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:11:27.153938 systemd[1]: session-7.scope: Consumed 7.732s CPU time, 157.9M memory peak, 0B memory swap peak. Aug 13 00:11:27.154705 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:11:27.155834 systemd-logind[1418]: Removed session 7. Aug 13 00:11:32.063361 systemd[1]: Created slice kubepods-besteffort-pod646580f5_5261_4937_a2b8_d44088b9d1a0.slice - libcontainer container kubepods-besteffort-pod646580f5_5261_4937_a2b8_d44088b9d1a0.slice. 
Aug 13 00:11:32.167147 kubelet[2484]: I0813 00:11:32.167093 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/646580f5-5261-4937-a2b8-d44088b9d1a0-tigera-ca-bundle\") pod \"calico-typha-69b4df77bb-qplh5\" (UID: \"646580f5-5261-4937-a2b8-d44088b9d1a0\") " pod="calico-system/calico-typha-69b4df77bb-qplh5"
Aug 13 00:11:32.167147 kubelet[2484]: I0813 00:11:32.167149 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/646580f5-5261-4937-a2b8-d44088b9d1a0-typha-certs\") pod \"calico-typha-69b4df77bb-qplh5\" (UID: \"646580f5-5261-4937-a2b8-d44088b9d1a0\") " pod="calico-system/calico-typha-69b4df77bb-qplh5"
Aug 13 00:11:32.167619 kubelet[2484]: I0813 00:11:32.167171 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsp7\" (UniqueName: \"kubernetes.io/projected/646580f5-5261-4937-a2b8-d44088b9d1a0-kube-api-access-5xsp7\") pod \"calico-typha-69b4df77bb-qplh5\" (UID: \"646580f5-5261-4937-a2b8-d44088b9d1a0\") " pod="calico-system/calico-typha-69b4df77bb-qplh5"
Aug 13 00:11:32.352528 systemd[1]: Created slice kubepods-besteffort-pod9a1ba16a_72f6_41aa_b998_a0bac2eb656c.slice - libcontainer container kubepods-besteffort-pod9a1ba16a_72f6_41aa_b998_a0bac2eb656c.slice.
Aug 13 00:11:32.366797 kubelet[2484]: E0813 00:11:32.366751 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:11:32.369077 kubelet[2484]: I0813 00:11:32.369050 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-cni-net-dir\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369149 kubelet[2484]: I0813 00:11:32.369086 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vktnj\" (UniqueName: \"kubernetes.io/projected/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-kube-api-access-vktnj\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369149 kubelet[2484]: I0813 00:11:32.369110 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-xtables-lock\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369149 kubelet[2484]: I0813 00:11:32.369126 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-cni-bin-dir\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369149 kubelet[2484]: I0813 00:11:32.369140 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-lib-modules\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369261 kubelet[2484]: I0813 00:11:32.369154 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-tigera-ca-bundle\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369261 kubelet[2484]: I0813 00:11:32.369168 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-policysync\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369261 kubelet[2484]: I0813 00:11:32.369181 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-var-lib-calico\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369261 kubelet[2484]: I0813 00:11:32.369197 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-cni-log-dir\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369261 kubelet[2484]: I0813 00:11:32.369212 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-var-run-calico\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369435 kubelet[2484]: I0813 00:11:32.369228 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-flexvol-driver-host\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.369435 kubelet[2484]: I0813 00:11:32.369242 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9a1ba16a-72f6-41aa-b998-a0bac2eb656c-node-certs\") pod \"calico-node-hd7fs\" (UID: \"9a1ba16a-72f6-41aa-b998-a0bac2eb656c\") " pod="calico-system/calico-node-hd7fs"
Aug 13 00:11:32.371752 containerd[1430]: time="2025-08-13T00:11:32.371624397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69b4df77bb-qplh5,Uid:646580f5-5261-4937-a2b8-d44088b9d1a0,Namespace:calico-system,Attempt:0,}"
Aug 13 00:11:32.435794 containerd[1430]: time="2025-08-13T00:11:32.435525350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:11:32.435794 containerd[1430]: time="2025-08-13T00:11:32.435587406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:11:32.435794 containerd[1430]: time="2025-08-13T00:11:32.435598889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:11:32.435794 containerd[1430]: time="2025-08-13T00:11:32.435675428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:11:32.456619 systemd[1]: Started cri-containerd-f2254caf73385e4e24ff909858bff05ed77b8e2b659fe4d5021e5a86d076a8d0.scope - libcontainer container f2254caf73385e4e24ff909858bff05ed77b8e2b659fe4d5021e5a86d076a8d0.
Aug 13 00:11:32.478732 kubelet[2484]: E0813 00:11:32.478669 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:11:32.478732 kubelet[2484]: W0813 00:11:32.478699 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:11:32.479857 kubelet[2484]: E0813 00:11:32.479823 2484 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:11:32.501277 containerd[1430]: time="2025-08-13T00:11:32.501228921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69b4df77bb-qplh5,Uid:646580f5-5261-4937-a2b8-d44088b9d1a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2254caf73385e4e24ff909858bff05ed77b8e2b659fe4d5021e5a86d076a8d0\""
Aug 13 00:11:32.502356 kubelet[2484]: E0813 00:11:32.502324 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:11:32.503207 containerd[1430]: time="2025-08-13T00:11:32.503184658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 13 00:11:32.594450 kubelet[2484]: E0813 00:11:32.594186 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nqv9j" podUID="e81ec000-f2d6-44b6-854d-59a730f62e7e"
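The E/W/E triplet above recurs with fresh timestamps throughout the rest of this boot: kubelet re-probes its FlexVolume plugin directory on each volume-reconcile event, and the nodeagent~uds driver binary does not exist on the host yet (the W line says "executable file not found"; the flexvol-driver-host mount listed earlier suggests calico-node installs it later), so each init call produces empty output that fails JSON unmarshalling. For reference, a FlexVolume driver is expected to answer init with a JSON status object on stdout; a hypothetical stand-in illustrating the shape kubelet parses (not Calico's actual uds driver):

```python
#!/usr/bin/env python3
# Hypothetical stand-in for a FlexVolume driver binary such as the missing
# nodeagent~uds/uds: kubelet runs "<driver> init" and parses a JSON status
# object from stdout. Empty output is exactly what triggers the
# "unexpected end of JSON input" errors in this log.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # attach: False tells kubelet this driver needs no attach/detach.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # FlexVolume convention for calls the driver does not implement:
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())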
Aug 13 00:11:32.664735 containerd[1430]: time="2025-08-13T00:11:32.664385809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hd7fs,Uid:9a1ba16a-72f6-41aa-b998-a0bac2eb656c,Namespace:calico-system,Attempt:0,}"
Aug 13 00:11:32.671738 kubelet[2484]: I0813 00:11:32.671672 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e81ec000-f2d6-44b6-854d-59a730f62e7e-varrun\") pod \"csi-node-driver-nqv9j\" (UID: \"e81ec000-f2d6-44b6-854d-59a730f62e7e\") " pod="calico-system/csi-node-driver-nqv9j"
Aug 13 00:11:32.672129 kubelet[2484]: I0813 00:11:32.672078 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btjdt\" (UniqueName: \"kubernetes.io/projected/e81ec000-f2d6-44b6-854d-59a730f62e7e-kube-api-access-btjdt\") pod \"csi-node-driver-nqv9j\" (UID: \"e81ec000-f2d6-44b6-854d-59a730f62e7e\") " pod="calico-system/csi-node-driver-nqv9j"
Aug 13 00:11:32.672794 kubelet[2484]: I0813 00:11:32.672765 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e81ec000-f2d6-44b6-854d-59a730f62e7e-kubelet-dir\") pod \"csi-node-driver-nqv9j\" (UID: \"e81ec000-f2d6-44b6-854d-59a730f62e7e\") " pod="calico-system/csi-node-driver-nqv9j"
Aug 13 00:11:32.674525 kubelet[2484]: I0813 00:11:32.674480 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e81ec000-f2d6-44b6-854d-59a730f62e7e-registration-dir\") pod \"csi-node-driver-nqv9j\" (UID: \"e81ec000-f2d6-44b6-854d-59a730f62e7e\") " pod="calico-system/csi-node-driver-nqv9j"
Aug 13 00:11:32.675044 kubelet[2484]: I0813 00:11:32.675004 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e81ec000-f2d6-44b6-854d-59a730f62e7e-socket-dir\") pod \"csi-node-driver-nqv9j\" (UID: \"e81ec000-f2d6-44b6-854d-59a730f62e7e\") " pod="calico-system/csi-node-driver-nqv9j"
Aug 13 00:11:32.688073 containerd[1430]: time="2025-08-13T00:11:32.687728019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:11:32.688073 containerd[1430]: time="2025-08-13T00:11:32.687812521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:11:32.688073 containerd[1430]: time="2025-08-13T00:11:32.687827485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:11:32.688073 containerd[1430]: time="2025-08-13T00:11:32.688006810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:11:32.708548 systemd[1]: Started cri-containerd-d7c75f9356477f350034bbcce50379a2fe823d075077d08a3ca7c85bfc561434.scope - libcontainer container d7c75f9356477f350034bbcce50379a2fe823d075077d08a3ca7c85bfc561434.
Aug 13 00:11:32.740216 containerd[1430]: time="2025-08-13T00:11:32.740163980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hd7fs,Uid:9a1ba16a-72f6-41aa-b998-a0bac2eb656c,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7c75f9356477f350034bbcce50379a2fe823d075077d08a3ca7c85bfc561434\"" Aug 13 00:11:32.776623 kubelet[2484]: E0813 00:11:32.776547 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:11:32.776623 kubelet[2484]: W0813 00:11:32.776571 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:11:32.776623 kubelet[2484]: E0813 00:11:32.776590 2484 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:11:32.777078 kubelet[2484]: E0813 00:11:32.777011 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:11:32.777078 kubelet[2484]: W0813 00:11:32.777026 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:11:32.777078 kubelet[2484]: E0813 00:11:32.777037 2484 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:11:32.777440 kubelet[2484]: E0813 00:11:32.777401 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:11:32.777440 kubelet[2484]: W0813 00:11:32.777416 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:11:32.777440 kubelet[2484]: E0813 00:11:32.777427 2484 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:11:32.777880 kubelet[2484]: E0813 00:11:32.777774 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:11:32.777880 kubelet[2484]: W0813 00:11:32.777787 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:11:32.777880 kubelet[2484]: E0813 00:11:32.777797 2484 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Aug 13 00:11:32.794064 kubelet[2484]: E0813 00:11:32.793899 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:11:32.794064 kubelet[2484]: W0813 00:11:32.793920 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:11:32.794064 kubelet[2484]: E0813 00:11:32.793941 2484 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:11:33.718404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410059059.mount: Deactivated successfully. Aug 13 00:11:34.199996 containerd[1430]: time="2025-08-13T00:11:34.199943046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:34.201712 containerd[1430]: time="2025-08-13T00:11:34.201652243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 13 00:11:34.203170 containerd[1430]: time="2025-08-13T00:11:34.203068252Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:34.205857 containerd[1430]: time="2025-08-13T00:11:34.205806529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:34.207645 containerd[1430]: time="2025-08-13T00:11:34.207607067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.70438644s" Aug 13 00:11:34.207876 containerd[1430]: time="2025-08-13T00:11:34.207762464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 13 00:11:34.209054 containerd[1430]: time="2025-08-13T00:11:34.208770658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:11:34.225003 containerd[1430]: time="2025-08-13T00:11:34.224959902Z" level=info msg="CreateContainer within sandbox \"f2254caf73385e4e24ff909858bff05ed77b8e2b659fe4d5021e5a86d076a8d0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 00:11:34.236558 containerd[1430]: time="2025-08-13T00:11:34.236453373Z" level=info msg="CreateContainer within sandbox \"f2254caf73385e4e24ff909858bff05ed77b8e2b659fe4d5021e5a86d076a8d0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"21616de855ffbe3d8eeb014de9a86ce44986d2aca9ecf727b8c998457049dd5d\"" Aug 13 00:11:34.237240 containerd[1430]: time="2025-08-13T00:11:34.237041590Z" level=info msg="StartContainer for \"21616de855ffbe3d8eeb014de9a86ce44986d2aca9ecf727b8c998457049dd5d\"" Aug 13 00:11:34.267587 systemd[1]: Started cri-containerd-21616de855ffbe3d8eeb014de9a86ce44986d2aca9ecf727b8c998457049dd5d.scope - libcontainer container 21616de855ffbe3d8eeb014de9a86ce44986d2aca9ecf727b8c998457049dd5d. Aug 13 00:11:34.311395 containerd[1430]: time="2025-08-13T00:11:34.310641900Z" level=info msg="StartContainer for \"21616de855ffbe3d8eeb014de9a86ce44986d2aca9ecf727b8c998457049dd5d\" returns successfully" Aug 13 00:11:34.554859 kubelet[2484]: E0813 00:11:34.554698 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nqv9j" podUID="e81ec000-f2d6-44b6-854d-59a730f62e7e" Aug 13 00:11:34.624825 kubelet[2484]: E0813 00:11:34.624777 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:34.647865 kubelet[2484]: I0813 00:11:34.647546 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69b4df77bb-qplh5" podStartSLOduration=0.941888716 podStartE2EDuration="2.647530257s" podCreationTimestamp="2025-08-13 00:11:32 +0000 UTC" firstStartedPulling="2025-08-13 00:11:32.502843091 +0000 UTC m=+20.041608141" lastFinishedPulling="2025-08-13 00:11:34.208484632 +0000 UTC m=+21.747249682" observedRunningTime="2025-08-13 00:11:34.646560552 +0000 UTC m=+22.185325602" watchObservedRunningTime="2025-08-13 00:11:34.647530257 +0000 UTC m=+22.186295307"
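The pod_startup_latency_tracker entry above carries enough to reconstruct its own math: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (SLO startup latency excludes pull time). A small, hedged check of the arithmetic, parsing the timestamps exactly as they appear in the line:

```go
// Recomputes the latency fields from the pod_startup_latency_tracker entry
// above. The timestamps are in Go's default time.Time text format, so the
// reference layout below round-trips them. Illustrative check only.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-13 00:11:32 +0000 UTC")             // podCreationTimestamp
	running := mustParse("2025-08-13 00:11:34.647530257 +0000 UTC")   // watchObservedRunningTime
	firstPull := mustParse("2025-08-13 00:11:32.502843091 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-08-13 00:11:34.208484632 +0000 UTC")  // lastFinishedPulling

	e2e := running.Sub(created)      // 2.647530257s == podStartE2EDuration
	pull := lastPull.Sub(firstPull)  // 1.705641541s spent pulling the typha image
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull == 941.888716ms == podStartSLOduration
}
```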
Aug 13 00:11:34.677637 kubelet[2484]: E0813 00:11:34.677582 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:11:34.677808 kubelet[2484]: W0813 00:11:34.677785 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:11:34.677842 kubelet[2484]: E0813 00:11:34.677817 2484 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:11:34.704459 kubelet[2484]: E0813 00:11:34.704441 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:11:34.704459 kubelet[2484]: W0813 00:11:34.704457 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:11:34.704555 kubelet[2484]: E0813 00:11:34.704470 2484 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:11:35.473853 containerd[1430]: time="2025-08-13T00:11:35.473791587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:35.474799 containerd[1430]: time="2025-08-13T00:11:35.474631975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 13 00:11:35.476100 containerd[1430]: time="2025-08-13T00:11:35.475454918Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:35.479027 containerd[1430]: time="2025-08-13T00:11:35.478973421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:35.479915 containerd[1430]: time="2025-08-13T00:11:35.479874862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.271060794s" Aug 13 00:11:35.479915 containerd[1430]: time="2025-08-13T00:11:35.479915671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:11:35.484405 containerd[1430]: time="2025-08-13T00:11:35.484366102Z" level=info msg="CreateContainer within sandbox \"d7c75f9356477f350034bbcce50379a2fe823d075077d08a3ca7c85bfc561434\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:11:35.496417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3782857928.mount: Deactivated successfully.
Aug 13 00:11:35.500973 containerd[1430]: time="2025-08-13T00:11:35.500832569Z" level=info msg="CreateContainer within sandbox \"d7c75f9356477f350034bbcce50379a2fe823d075077d08a3ca7c85bfc561434\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c31d23d77abca3f78e4bb00700a67f0ab1f5d9c7e89ac64f4b4502beabd66226\"" Aug 13 00:11:35.502249 containerd[1430]: time="2025-08-13T00:11:35.501791543Z" level=info msg="StartContainer for \"c31d23d77abca3f78e4bb00700a67f0ab1f5d9c7e89ac64f4b4502beabd66226\"" Aug 13 00:11:35.544600 systemd[1]: Started cri-containerd-c31d23d77abca3f78e4bb00700a67f0ab1f5d9c7e89ac64f4b4502beabd66226.scope - libcontainer container c31d23d77abca3f78e4bb00700a67f0ab1f5d9c7e89ac64f4b4502beabd66226. Aug 13 00:11:35.618165 systemd[1]: cri-containerd-c31d23d77abca3f78e4bb00700a67f0ab1f5d9c7e89ac64f4b4502beabd66226.scope: Deactivated successfully. Aug 13 00:11:35.659613 containerd[1430]: time="2025-08-13T00:11:35.658807428Z" level=info msg="StartContainer for \"c31d23d77abca3f78e4bb00700a67f0ab1f5d9c7e89ac64f4b4502beabd66226\" returns successfully" Aug 13 00:11:35.663894 kubelet[2484]: E0813 00:11:35.663845 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:35.685362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c31d23d77abca3f78e4bb00700a67f0ab1f5d9c7e89ac64f4b4502beabd66226-rootfs.mount: Deactivated successfully. Aug 13 00:11:35.735906 containerd[1430]: time="2025-08-13T00:11:35.734444432Z" level=info msg="shim disconnected" id=c31d23d77abca3f78e4bb00700a67f0ab1f5d9c7e89ac64f4b4502beabd66226 namespace=k8s.io Aug 13 00:11:35.735906 containerd[1430]: time="2025-08-13T00:11:35.735812016Z" level=warning msg="cleaning up after shim disconnected" id=c31d23d77abca3f78e4bb00700a67f0ab1f5d9c7e89ac64f4b4502beabd66226 namespace=k8s.io Aug 13 00:11:35.735906 containerd[1430]: time="2025-08-13T00:11:35.735831541Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:11:36.557243 kubelet[2484]: E0813 00:11:36.557019 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nqv9j" podUID="e81ec000-f2d6-44b6-854d-59a730f62e7e" Aug 13 00:11:36.667043 kubelet[2484]: E0813 00:11:36.666192 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:36.668651 containerd[1430]: time="2025-08-13T00:11:36.668614491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
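The recurring dns.go "Nameserver limits exceeded" entries mean the host resolv.conf lists more nameservers than kubelet will propagate into a pod: the applied line keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and the rest are dropped with this warning. A hedged sketch of that cap; the three-server limit is kubelet's documented behavior, but the code below is illustrative rather than kubelet's source, and the fourth server is hypothetical:

```go
// Illustrative version of the nameserver cap behind the dns.go warnings
// above: at most three nameservers are applied to a pod's resolv.conf and
// the remainder are reported as omitted. Sketch only, not kubelet code.
package main

import "fmt"

const maxDNSNameservers = 3 // kubelet's per-pod nameserver limit

func applyNameservers(all []string) (applied, omitted []string) {
	if len(all) <= maxDNSNameservers {
		return all, nil
	}
	return all[:maxDNSNameservers], all[maxDNSNameservers:]
}

func main() {
	// The applied line in the log is "1.1.1.1 1.0.0.1 8.8.8.8", so the host
	// resolv.conf must have listed at least one more server (hypothetical here).
	applied, omitted := applyNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"})
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```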
Aug 13 00:11:38.554182 kubelet[2484]: E0813 00:11:38.554047 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nqv9j" podUID="e81ec000-f2d6-44b6-854d-59a730f62e7e" Aug 13 00:11:39.859569 containerd[1430]: time="2025-08-13T00:11:39.859529596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:39.860245 containerd[1430]: time="2025-08-13T00:11:39.860215406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:11:39.861005 containerd[1430]: time="2025-08-13T00:11:39.860967588Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:39.863525 containerd[1430]: time="2025-08-13T00:11:39.863284347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:39.864323 containerd[1430]: time="2025-08-13T00:11:39.864204881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.195547021s" Aug 13 00:11:39.864323 containerd[1430]: time="2025-08-13T00:11:39.864243968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:11:39.878302 containerd[1430]: time="2025-08-13T00:11:39.878258181Z" level=info msg="CreateContainer within sandbox \"d7c75f9356477f350034bbcce50379a2fe823d075077d08a3ca7c85bfc561434\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:11:39.892323 containerd[1430]: time="2025-08-13T00:11:39.892269753Z" level=info msg="CreateContainer within sandbox \"d7c75f9356477f350034bbcce50379a2fe823d075077d08a3ca7c85bfc561434\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6d29dc345676ce78fe8288b56822260191c25a94b04587fe6a9bc7da0d83b667\"" Aug 13 00:11:39.893152 containerd[1430]: time="2025-08-13T00:11:39.893114713Z" level=info msg="StartContainer for \"6d29dc345676ce78fe8288b56822260191c25a94b04587fe6a9bc7da0d83b667\"" Aug 13 00:11:39.922544 systemd[1]: Started cri-containerd-6d29dc345676ce78fe8288b56822260191c25a94b04587fe6a9bc7da0d83b667.scope - libcontainer container 6d29dc345676ce78fe8288b56822260191c25a94b04587fe6a9bc7da0d83b667. Aug 13 00:11:39.948559 containerd[1430]: time="2025-08-13T00:11:39.946722701Z" level=info msg="StartContainer for \"6d29dc345676ce78fe8288b56822260191c25a94b04587fe6a9bc7da0d83b667\" returns successfully" Aug 13 00:11:40.554271 kubelet[2484]: E0813 00:11:40.554206 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nqv9j" podUID="e81ec000-f2d6-44b6-854d-59a730f62e7e" Aug 13 00:11:40.581571 systemd[1]: cri-containerd-6d29dc345676ce78fe8288b56822260191c25a94b04587fe6a9bc7da0d83b667.scope: Deactivated successfully. Aug 13 00:11:40.602192 kubelet[2484]: I0813 00:11:40.602162 2484 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:11:40.606546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d29dc345676ce78fe8288b56822260191c25a94b04587fe6a9bc7da0d83b667-rootfs.mount: Deactivated successfully.
Aug 13 00:11:40.612353 containerd[1430]: time="2025-08-13T00:11:40.612109361Z" level=info msg="shim disconnected" id=6d29dc345676ce78fe8288b56822260191c25a94b04587fe6a9bc7da0d83b667 namespace=k8s.io Aug 13 00:11:40.612353 containerd[1430]: time="2025-08-13T00:11:40.612168212Z" level=warning msg="cleaning up after shim disconnected" id=6d29dc345676ce78fe8288b56822260191c25a94b04587fe6a9bc7da0d83b667 namespace=k8s.io Aug 13 00:11:40.612353 containerd[1430]: time="2025-08-13T00:11:40.612176214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:11:40.652389 systemd[1]: Created slice kubepods-burstable-pod9b6c872d_33f8_4452_b725_41047a59fd6c.slice - libcontainer container kubepods-burstable-pod9b6c872d_33f8_4452_b725_41047a59fd6c.slice. Aug 13 00:11:40.666828 systemd[1]: Created slice kubepods-burstable-podc774b47c_e08c_42ad_b562_dd791cc0ed35.slice - libcontainer container kubepods-burstable-podc774b47c_e08c_42ad_b562_dd791cc0ed35.slice. Aug 13 00:11:40.678310 systemd[1]: Created slice kubepods-besteffort-pod559a4267_56cf_459b_a0e0_15a1cc2cb395.slice - libcontainer container kubepods-besteffort-pod559a4267_56cf_459b_a0e0_15a1cc2cb395.slice. Aug 13 00:11:40.682601 containerd[1430]: time="2025-08-13T00:11:40.682559398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:11:40.685673 systemd[1]: Created slice kubepods-besteffort-pod975cc5f0_0666_4e02_aeb2_c4aaa10bc520.slice - libcontainer container kubepods-besteffort-pod975cc5f0_0666_4e02_aeb2_c4aaa10bc520.slice. Aug 13 00:11:40.707747 systemd[1]: Created slice kubepods-besteffort-pod73c192b0_5021_43fd_851e_5152f889105e.slice - libcontainer container kubepods-besteffort-pod73c192b0_5021_43fd_851e_5152f889105e.slice. Aug 13 00:11:40.716203 systemd[1]: Created slice kubepods-besteffort-pod94cfa85d_0b82_444c_ba96_be8ce7895a84.slice - libcontainer container kubepods-besteffort-pod94cfa85d_0b82_444c_ba96_be8ce7895a84.slice. Aug 13 00:11:40.721082 systemd[1]: Created slice kubepods-besteffort-podfbdf544e_e157_4095_9b30_e5d9130445c2.slice - libcontainer container kubepods-besteffort-podfbdf544e_e157_4095_9b30_e5d9130445c2.slice. 
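The Created slice entries above are kubelet's systemd cgroup driver materializing one slice per pod under its QoS class: the name embeds the pod UID with dashes mapped to underscores. A hedged helper that reproduces the names visible above for the burstable and besteffort pods (this sketches the naming convention only, not kubelet's code; guaranteed-class pods follow a slightly different pattern not shown here):

```go
// Reproduces the pod slice names in the systemd lines above under the
// systemd cgroup driver: kubepods-<qos>-pod<uid>.slice, with "-" in the pod
// UID mapped to "_". Illustrative sketch of the convention, not kubelet code.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches "kubepods-burstable-pod9b6c872d_33f8_4452_b725_41047a59fd6c.slice" above.
	fmt.Println(podSliceName("burstable", "9b6c872d-33f8-4452-b725-41047a59fd6c"))
	fmt.Println(podSliceName("besteffort", "559a4267-56cf-459b-a0e0-15a1cc2cb395"))
}
```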
Aug 13 00:11:40.748757 kubelet[2484]: I0813 00:11:40.748575 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbdf\" (UniqueName: \"kubernetes.io/projected/9b6c872d-33f8-4452-b725-41047a59fd6c-kube-api-access-vfbdf\") pod \"coredns-674b8bbfcf-f2bhn\" (UID: \"9b6c872d-33f8-4452-b725-41047a59fd6c\") " pod="kube-system/coredns-674b8bbfcf-f2bhn" Aug 13 00:11:40.749008 kubelet[2484]: I0813 00:11:40.748766 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b6c872d-33f8-4452-b725-41047a59fd6c-config-volume\") pod \"coredns-674b8bbfcf-f2bhn\" (UID: \"9b6c872d-33f8-4452-b725-41047a59fd6c\") " pod="kube-system/coredns-674b8bbfcf-f2bhn" Aug 13 00:11:40.853457 kubelet[2484]: I0813 00:11:40.852861 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rx4\" (UniqueName: \"kubernetes.io/projected/73c192b0-5021-43fd-851e-5152f889105e-kube-api-access-f8rx4\") pod \"calico-apiserver-69bf9ff8c6-ds8gn\" (UID: \"73c192b0-5021-43fd-851e-5152f889105e\") " pod="calico-apiserver/calico-apiserver-69bf9ff8c6-ds8gn" Aug 13 00:11:40.853457 kubelet[2484]: I0813 00:11:40.852916 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbdf544e-e157-4095-9b30-e5d9130445c2-config\") pod \"goldmane-768f4c5c69-tg9t5\" (UID: \"fbdf544e-e157-4095-9b30-e5d9130445c2\") " pod="calico-system/goldmane-768f4c5c69-tg9t5" Aug 13 00:11:40.853457 kubelet[2484]: I0813 00:11:40.852939 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbdf544e-e157-4095-9b30-e5d9130445c2-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-tg9t5\" (UID: \"fbdf544e-e157-4095-9b30-e5d9130445c2\") " pod="calico-system/goldmane-768f4c5c69-tg9t5" Aug 13 00:11:40.853457 kubelet[2484]: I0813 00:11:40.852965 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73c192b0-5021-43fd-851e-5152f889105e-calico-apiserver-certs\") pod \"calico-apiserver-69bf9ff8c6-ds8gn\" (UID: \"73c192b0-5021-43fd-851e-5152f889105e\") " pod="calico-apiserver/calico-apiserver-69bf9ff8c6-ds8gn" Aug 13 00:11:40.853457 kubelet[2484]: I0813 00:11:40.852990 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94cfa85d-0b82-444c-ba96-be8ce7895a84-tigera-ca-bundle\") pod \"calico-kube-controllers-54446d6f8c-zdlwk\" (UID: \"94cfa85d-0b82-444c-ba96-be8ce7895a84\") " pod="calico-system/calico-kube-controllers-54446d6f8c-zdlwk" Aug 13 00:11:40.853825 kubelet[2484]: I0813 00:11:40.853027 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58dnn\" (UniqueName: \"kubernetes.io/projected/94cfa85d-0b82-444c-ba96-be8ce7895a84-kube-api-access-58dnn\") pod \"calico-kube-controllers-54446d6f8c-zdlwk\" (UID: \"94cfa85d-0b82-444c-ba96-be8ce7895a84\") " pod="calico-system/calico-kube-controllers-54446d6f8c-zdlwk" Aug 13 00:11:40.853825 kubelet[2484]: I0813 00:11:40.853056 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgwv\" 
(UniqueName: \"kubernetes.io/projected/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-kube-api-access-nlgwv\") pod \"whisker-754c986cf8-jmd7h\" (UID: \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\") " pod="calico-system/whisker-754c986cf8-jmd7h" Aug 13 00:11:40.853825 kubelet[2484]: I0813 00:11:40.853097 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk48b\" (UniqueName: \"kubernetes.io/projected/fbdf544e-e157-4095-9b30-e5d9130445c2-kube-api-access-pk48b\") pod \"goldmane-768f4c5c69-tg9t5\" (UID: \"fbdf544e-e157-4095-9b30-e5d9130445c2\") " pod="calico-system/goldmane-768f4c5c69-tg9t5" Aug 13 00:11:40.853825 kubelet[2484]: I0813 00:11:40.853150 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm5bb\" (UniqueName: \"kubernetes.io/projected/559a4267-56cf-459b-a0e0-15a1cc2cb395-kube-api-access-dm5bb\") pod \"calico-apiserver-69bf9ff8c6-mp76l\" (UID: \"559a4267-56cf-459b-a0e0-15a1cc2cb395\") " pod="calico-apiserver/calico-apiserver-69bf9ff8c6-mp76l" Aug 13 00:11:40.853825 kubelet[2484]: I0813 00:11:40.853174 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-whisker-backend-key-pair\") pod \"whisker-754c986cf8-jmd7h\" (UID: \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\") " pod="calico-system/whisker-754c986cf8-jmd7h" Aug 13 00:11:40.854105 kubelet[2484]: I0813 00:11:40.853190 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-whisker-ca-bundle\") pod \"whisker-754c986cf8-jmd7h\" (UID: \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\") " pod="calico-system/whisker-754c986cf8-jmd7h" Aug 13 00:11:40.854105 kubelet[2484]: I0813 00:11:40.853214 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2vwp\" (UniqueName: \"kubernetes.io/projected/c774b47c-e08c-42ad-b562-dd791cc0ed35-kube-api-access-s2vwp\") pod \"coredns-674b8bbfcf-lh4nv\" (UID: \"c774b47c-e08c-42ad-b562-dd791cc0ed35\") " pod="kube-system/coredns-674b8bbfcf-lh4nv" Aug 13 00:11:40.854105 kubelet[2484]: I0813 00:11:40.853235 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fbdf544e-e157-4095-9b30-e5d9130445c2-goldmane-key-pair\") pod \"goldmane-768f4c5c69-tg9t5\" (UID: \"fbdf544e-e157-4095-9b30-e5d9130445c2\") " pod="calico-system/goldmane-768f4c5c69-tg9t5" Aug 13 00:11:40.854105 kubelet[2484]: I0813 00:11:40.853259 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/559a4267-56cf-459b-a0e0-15a1cc2cb395-calico-apiserver-certs\") pod \"calico-apiserver-69bf9ff8c6-mp76l\" (UID: \"559a4267-56cf-459b-a0e0-15a1cc2cb395\") " pod="calico-apiserver/calico-apiserver-69bf9ff8c6-mp76l" Aug 13 00:11:40.854105 kubelet[2484]: I0813 00:11:40.853298 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c774b47c-e08c-42ad-b562-dd791cc0ed35-config-volume\") pod \"coredns-674b8bbfcf-lh4nv\" (UID: \"c774b47c-e08c-42ad-b562-dd791cc0ed35\") " 
pod="kube-system/coredns-674b8bbfcf-lh4nv" Aug 13 00:11:40.961281 kubelet[2484]: E0813 00:11:40.960629 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:40.963994 containerd[1430]: time="2025-08-13T00:11:40.963945670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f2bhn,Uid:9b6c872d-33f8-4452-b725-41047a59fd6c,Namespace:kube-system,Attempt:0,}" Aug 13 00:11:41.008723 containerd[1430]: time="2025-08-13T00:11:41.008675325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754c986cf8-jmd7h,Uid:975cc5f0-0666-4e02-aeb2-c4aaa10bc520,Namespace:calico-system,Attempt:0,}" Aug 13 00:11:41.011600 containerd[1430]: time="2025-08-13T00:11:41.011552790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bf9ff8c6-ds8gn,Uid:73c192b0-5021-43fd-851e-5152f889105e,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:11:41.029599 containerd[1430]: time="2025-08-13T00:11:41.029449532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tg9t5,Uid:fbdf544e-e157-4095-9b30-e5d9130445c2,Namespace:calico-system,Attempt:0,}" Aug 13 00:11:41.036968 containerd[1430]: time="2025-08-13T00:11:41.035289597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54446d6f8c-zdlwk,Uid:94cfa85d-0b82-444c-ba96-be8ce7895a84,Namespace:calico-system,Attempt:0,}" Aug 13 00:11:41.279555 kubelet[2484]: E0813 00:11:41.279374 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:41.286368 containerd[1430]: time="2025-08-13T00:11:41.285863270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bf9ff8c6-mp76l,Uid:559a4267-56cf-459b-a0e0-15a1cc2cb395,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:11:41.311747 containerd[1430]: time="2025-08-13T00:11:41.311677482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lh4nv,Uid:c774b47c-e08c-42ad-b562-dd791cc0ed35,Namespace:kube-system,Attempt:0,}" Aug 13 00:11:41.359041 containerd[1430]: time="2025-08-13T00:11:41.358975146Z" level=error msg="Failed to destroy network for sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.361859 containerd[1430]: time="2025-08-13T00:11:41.361375247Z" level=error msg="encountered an error cleaning up failed sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.361859 containerd[1430]: time="2025-08-13T00:11:41.361592485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bf9ff8c6-ds8gn,Uid:73c192b0-5021-43fd-851e-5152f889105e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.366197 kubelet[2484]: E0813 00:11:41.366126 2484 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.366368 kubelet[2484]: E0813 00:11:41.366226 2484 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bf9ff8c6-ds8gn" Aug 13 00:11:41.366368 kubelet[2484]: E0813 00:11:41.366250 2484 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bf9ff8c6-ds8gn" Aug 13 00:11:41.366368 kubelet[2484]: E0813 00:11:41.366311 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69bf9ff8c6-ds8gn_calico-apiserver(73c192b0-5021-43fd-851e-5152f889105e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69bf9ff8c6-ds8gn_calico-apiserver(73c192b0-5021-43fd-851e-5152f889105e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bf9ff8c6-ds8gn" podUID="73c192b0-5021-43fd-851e-5152f889105e" Aug 13 00:11:41.375143 containerd[1430]: time="2025-08-13T00:11:41.375092055Z" level=error msg="Failed to destroy network for sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.376314 containerd[1430]: time="2025-08-13T00:11:41.375698282Z" level=error msg="encountered an error cleaning up failed sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.376510 containerd[1430]: time="2025-08-13T00:11:41.376482700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754c986cf8-jmd7h,Uid:975cc5f0-0666-4e02-aeb2-c4aaa10bc520,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.377572 kubelet[2484]: E0813 00:11:41.377156 2484 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.377572 kubelet[2484]: E0813 00:11:41.377220 2484 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754c986cf8-jmd7h" Aug 13 00:11:41.377572 kubelet[2484]: E0813 00:11:41.377242 2484 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754c986cf8-jmd7h" Aug 13 00:11:41.377878 kubelet[2484]: E0813 00:11:41.377291 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-754c986cf8-jmd7h_calico-system(975cc5f0-0666-4e02-aeb2-c4aaa10bc520)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-754c986cf8-jmd7h_calico-system(975cc5f0-0666-4e02-aeb2-c4aaa10bc520)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-754c986cf8-jmd7h" podUID="975cc5f0-0666-4e02-aeb2-c4aaa10bc520" Aug 13 00:11:41.389863 containerd[1430]: time="2025-08-13T00:11:41.389797957Z" level=error msg="Failed to destroy network for sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.390682 containerd[1430]: time="2025-08-13T00:11:41.390631824Z" level=error msg="Failed to destroy network for sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.390968 containerd[1430]: time="2025-08-13T00:11:41.390770488Z" level=error msg="Failed to destroy network for sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.391448 containerd[1430]: time="2025-08-13T00:11:41.391406880Z" level=error msg="encountered an error cleaning up failed sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.391521 containerd[1430]: time="2025-08-13T00:11:41.391472531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tg9t5,Uid:fbdf544e-e157-4095-9b30-e5d9130445c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.391765 kubelet[2484]: E0813 00:11:41.391714 2484 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.391866 kubelet[2484]: E0813 00:11:41.391786 2484 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-tg9t5" Aug 13 00:11:41.391866 kubelet[2484]: E0813 00:11:41.391808 2484 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-tg9t5" Aug 13 00:11:41.392073 kubelet[2484]: E0813 00:11:41.391863 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-tg9t5_calico-system(fbdf544e-e157-4095-9b30-e5d9130445c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-tg9t5_calico-system(fbdf544e-e157-4095-9b30-e5d9130445c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-tg9t5" podUID="fbdf544e-e157-4095-9b30-e5d9130445c2" Aug 13 00:11:41.392272 containerd[1430]: time="2025-08-13T00:11:41.392218542Z" level=error msg="encountered an error cleaning up failed sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.392548 containerd[1430]: time="2025-08-13T00:11:41.392376770Z" level=error msg="encountered an error cleaning up failed sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.392626 containerd[1430]: time="2025-08-13T00:11:41.392565923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54446d6f8c-zdlwk,Uid:94cfa85d-0b82-444c-ba96-be8ce7895a84,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.392881 containerd[1430]: time="2025-08-13T00:11:41.392522836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f2bhn,Uid:9b6c872d-33f8-4452-b725-41047a59fd6c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.393060 kubelet[2484]: E0813 00:11:41.392799 2484 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.393060 kubelet[2484]: E0813 00:11:41.392806 2484 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.393060 kubelet[2484]: E0813 00:11:41.392956 2484 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f2bhn" Aug 13 00:11:41.393060 kubelet[2484]: E0813 00:11:41.392981 2484 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f2bhn" Aug 13 00:11:41.393289 kubelet[2484]: E0813 00:11:41.393039 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-f2bhn_kube-system(9b6c872d-33f8-4452-b725-41047a59fd6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-f2bhn_kube-system(9b6c872d-33f8-4452-b725-41047a59fd6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f2bhn" podUID="9b6c872d-33f8-4452-b725-41047a59fd6c" Aug 13 00:11:41.393289 kubelet[2484]: E0813 00:11:41.392884 2484 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54446d6f8c-zdlwk" Aug 13 00:11:41.393289 kubelet[2484]: E0813 00:11:41.393119 2484 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54446d6f8c-zdlwk" Aug 13 00:11:41.393610 kubelet[2484]: E0813 00:11:41.393157 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54446d6f8c-zdlwk_calico-system(94cfa85d-0b82-444c-ba96-be8ce7895a84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54446d6f8c-zdlwk_calico-system(94cfa85d-0b82-444c-ba96-be8ce7895a84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54446d6f8c-zdlwk" podUID="94cfa85d-0b82-444c-ba96-be8ce7895a84" Aug 13 00:11:41.425766 containerd[1430]: time="2025-08-13T00:11:41.425695900Z" level=error msg="Failed to destroy network for sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.426492 containerd[1430]: time="2025-08-13T00:11:41.426361697Z" level=error msg="encountered an error cleaning up failed sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Aug 13 00:11:41.426492 containerd[1430]: time="2025-08-13T00:11:41.426432669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bf9ff8c6-mp76l,Uid:559a4267-56cf-459b-a0e0-15a1cc2cb395,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.427025 kubelet[2484]: E0813 00:11:41.426908 2484 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.427198 kubelet[2484]: E0813 00:11:41.426999 2484 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bf9ff8c6-mp76l" Aug 13 00:11:41.427198 kubelet[2484]: E0813 00:11:41.427128 2484 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bf9ff8c6-mp76l" Aug 13 00:11:41.427388 kubelet[2484]: E0813 00:11:41.427292 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69bf9ff8c6-mp76l_calico-apiserver(559a4267-56cf-459b-a0e0-15a1cc2cb395)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69bf9ff8c6-mp76l_calico-apiserver(559a4267-56cf-459b-a0e0-15a1cc2cb395)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bf9ff8c6-mp76l" podUID="559a4267-56cf-459b-a0e0-15a1cc2cb395" Aug 13 00:11:41.428444 containerd[1430]: time="2025-08-13T00:11:41.427891365Z" level=error msg="Failed to destroy network for sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.428444 containerd[1430]: time="2025-08-13T00:11:41.428199219Z" level=error msg="encountered an error cleaning up failed sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.428444 containerd[1430]: time="2025-08-13T00:11:41.428242947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lh4nv,Uid:c774b47c-e08c-42ad-b562-dd791cc0ed35,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.428574 kubelet[2484]: E0813 00:11:41.428466 2484 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.428574 kubelet[2484]: E0813 00:11:41.428555 2484 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lh4nv" Aug 13 00:11:41.428624 kubelet[2484]: E0813 00:11:41.428577 2484 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lh4nv" Aug 13 00:11:41.428648 kubelet[2484]: E0813 00:11:41.428626 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lh4nv_kube-system(c774b47c-e08c-42ad-b562-dd791cc0ed35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lh4nv_kube-system(c774b47c-e08c-42ad-b562-dd791cc0ed35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lh4nv" podUID="c774b47c-e08c-42ad-b562-dd791cc0ed35" Aug 13 00:11:41.682477 kubelet[2484]: I0813 00:11:41.682446 2484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:11:41.684870 containerd[1430]: time="2025-08-13T00:11:41.683129937Z" level=info msg="StopPodSandbox for \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\"" Aug 13 00:11:41.684870 containerd[1430]: time="2025-08-13T00:11:41.683325171Z" level=info msg="Ensure that sandbox 7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3 in task-service has been cleanup successfully" Aug 13 
00:11:41.685039 kubelet[2484]: I0813 00:11:41.684252 2484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:11:41.685867 containerd[1430]: time="2025-08-13T00:11:41.685325042Z" level=info msg="StopPodSandbox for \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\"" Aug 13 00:11:41.686711 kubelet[2484]: I0813 00:11:41.686590 2484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:11:41.688884 kubelet[2484]: I0813 00:11:41.688306 2484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:11:41.691036 kubelet[2484]: I0813 00:11:41.690935 2484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:11:41.693877 kubelet[2484]: I0813 00:11:41.693850 2484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:11:41.694977 containerd[1430]: time="2025-08-13T00:11:41.686287331Z" level=info msg="Ensure that sandbox ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094 in task-service has been cleanup successfully" Aug 13 00:11:41.694977 containerd[1430]: time="2025-08-13T00:11:41.687396766Z" level=info msg="StopPodSandbox for \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\"" Aug 13 00:11:41.694977 containerd[1430]: time="2025-08-13T00:11:41.688879186Z" level=info msg="StopPodSandbox for \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\"" Aug 13 00:11:41.694977 containerd[1430]: time="2025-08-13T00:11:41.694978777Z" level=info msg="Ensure that sandbox a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a in task-service has been cleanup successfully" Aug 13 00:11:41.695420 containerd[1430]: time="2025-08-13T00:11:41.695196015Z" level=info msg="Ensure that sandbox f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d in task-service has been cleanup successfully" Aug 13 00:11:41.695572 containerd[1430]: time="2025-08-13T00:11:41.695536875Z" level=info msg="StopPodSandbox for \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\"" Aug 13 00:11:41.696396 containerd[1430]: time="2025-08-13T00:11:41.696041004Z" level=info msg="Ensure that sandbox 5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960 in task-service has been cleanup successfully" Aug 13 00:11:41.696475 containerd[1430]: time="2025-08-13T00:11:41.691920880Z" level=info msg="StopPodSandbox for \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\"" Aug 13 00:11:41.696955 containerd[1430]: time="2025-08-13T00:11:41.696860587Z" level=info msg="Ensure that sandbox d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe in task-service has been cleanup successfully" Aug 13 00:11:41.697818 kubelet[2484]: I0813 00:11:41.697646 2484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:11:41.698494 containerd[1430]: time="2025-08-13T00:11:41.698465149Z" level=info msg="StopPodSandbox for \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\"" Aug 13 00:11:41.698925 
containerd[1430]: time="2025-08-13T00:11:41.698826213Z" level=info msg="Ensure that sandbox 98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a in task-service has been cleanup successfully" Aug 13 00:11:41.750150 containerd[1430]: time="2025-08-13T00:11:41.750087412Z" level=error msg="StopPodSandbox for \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\" failed" error="failed to destroy network for sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.750420 kubelet[2484]: E0813 00:11:41.750380 2484 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:11:41.754054 containerd[1430]: time="2025-08-13T00:11:41.753994858Z" level=error msg="StopPodSandbox for \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\" failed" error="failed to destroy network for sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.754500 kubelet[2484]: E0813 00:11:41.754189 2484 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a"} Aug 13 00:11:41.754500 kubelet[2484]: E0813 00:11:41.754298 2484 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c774b47c-e08c-42ad-b562-dd791cc0ed35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:11:41.754500 kubelet[2484]: E0813 00:11:41.754320 2484 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:11:41.754500 kubelet[2484]: E0813 00:11:41.754330 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c774b47c-e08c-42ad-b562-dd791cc0ed35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lh4nv" podUID="c774b47c-e08c-42ad-b562-dd791cc0ed35" Aug 13 00:11:41.754500 kubelet[2484]: E0813 00:11:41.754375 2484 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d"} Aug 13 00:11:41.754728 kubelet[2484]: E0813 00:11:41.754404 2484 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73c192b0-5021-43fd-851e-5152f889105e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:11:41.754728 kubelet[2484]: E0813 00:11:41.754427 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73c192b0-5021-43fd-851e-5152f889105e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bf9ff8c6-ds8gn" podUID="73c192b0-5021-43fd-851e-5152f889105e" Aug 13 00:11:41.757837 containerd[1430]: time="2025-08-13T00:11:41.757105284Z" level=error msg="StopPodSandbox for \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\" failed" error="failed to destroy network for sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.757995 kubelet[2484]: E0813 00:11:41.757392 2484 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:11:41.757995 kubelet[2484]: E0813 00:11:41.757468 2484 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3"} Aug 13 00:11:41.757995 kubelet[2484]: E0813 00:11:41.757509 2484 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"559a4267-56cf-459b-a0e0-15a1cc2cb395\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:11:41.757995 kubelet[2484]: E0813 00:11:41.757539 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"559a4267-56cf-459b-a0e0-15a1cc2cb395\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bf9ff8c6-mp76l" podUID="559a4267-56cf-459b-a0e0-15a1cc2cb395" Aug 13 00:11:41.764021 containerd[1430]: time="2025-08-13T00:11:41.763864671Z" level=error msg="StopPodSandbox for \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\" failed" error="failed to destroy network for sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.764439 kubelet[2484]: E0813 00:11:41.764158 2484 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:11:41.764439 kubelet[2484]: E0813 00:11:41.764216 2484 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe"} Aug 13 00:11:41.764439 kubelet[2484]: E0813 00:11:41.764248 2484 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94cfa85d-0b82-444c-ba96-be8ce7895a84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:11:41.764439 kubelet[2484]: E0813 00:11:41.764270 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94cfa85d-0b82-444c-ba96-be8ce7895a84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54446d6f8c-zdlwk" podUID="94cfa85d-0b82-444c-ba96-be8ce7895a84" Aug 13 00:11:41.771131 containerd[1430]: time="2025-08-13T00:11:41.770877902Z" level=error msg="StopPodSandbox for \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\" failed" error="failed to destroy network for sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.771131 containerd[1430]: time="2025-08-13T00:11:41.771064615Z" level=error msg="StopPodSandbox for 
\"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\" failed" error="failed to destroy network for sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.771318 kubelet[2484]: E0813 00:11:41.771273 2484 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:11:41.771560 kubelet[2484]: E0813 00:11:41.771324 2484 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960"} Aug 13 00:11:41.771560 kubelet[2484]: E0813 00:11:41.771390 2484 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:11:41.771560 kubelet[2484]: E0813 00:11:41.771415 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-754c986cf8-jmd7h" podUID="975cc5f0-0666-4e02-aeb2-c4aaa10bc520" Aug 13 00:11:41.771560 kubelet[2484]: E0813 00:11:41.771481 2484 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:11:41.771560 kubelet[2484]: E0813 00:11:41.771500 2484 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a"} Aug 13 00:11:41.771791 kubelet[2484]: E0813 00:11:41.771518 2484 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b6c872d-33f8-4452-b725-41047a59fd6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:11:41.771791 kubelet[2484]: E0813 00:11:41.771543 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b6c872d-33f8-4452-b725-41047a59fd6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f2bhn" podUID="9b6c872d-33f8-4452-b725-41047a59fd6c" Aug 13 00:11:41.775897 containerd[1430]: time="2025-08-13T00:11:41.775841774Z" level=error msg="StopPodSandbox for \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\" failed" error="failed to destroy network for sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:41.776303 kubelet[2484]: E0813 00:11:41.776092 2484 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:11:41.776303 kubelet[2484]: E0813 00:11:41.776140 2484 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094"} Aug 13 00:11:41.776303 kubelet[2484]: E0813 00:11:41.776173 2484 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbdf544e-e157-4095-9b30-e5d9130445c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:11:41.776303 kubelet[2484]: E0813 00:11:41.776196 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbdf544e-e157-4095-9b30-e5d9130445c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-tg9t5" podUID="fbdf544e-e157-4095-9b30-e5d9130445c2" Aug 13 00:11:42.561871 systemd[1]: Created slice kubepods-besteffort-pode81ec000_f2d6_44b6_854d_59a730f62e7e.slice - libcontainer container kubepods-besteffort-pode81ec000_f2d6_44b6_854d_59a730f62e7e.slice. 
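
Every sandbox failure above has the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file that the calico/node container writes once it is running with /var/lib/calico/ mounted. Until that file exists, every CNI ADD and DEL fails identically and the kubelet keeps retrying each pod; the csi-node-driver pod whose slice is created next hits the same wall. A minimal Go sketch of the gating check, assuming only what the error string itself says (the path comes from the log; everything else is illustrative, not Calico's source):

    package main

    import (
        "fmt"
        "os"
    )

    // nodenameFile is the path named in the log; calico/node writes the
    // node's name here on startup, and the CNI plugin refuses to proceed
    // without it.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
        if _, err := os.Stat(nodenameFile); err != nil {
            // Mirrors the error repeated throughout the log: until
            // calico/node runs, RunPodSandbox fails and the pod is retried.
            fmt.Fprintf(os.Stderr, "plugin type=\"calico\" failed: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
            os.Exit(1)
        }
        name, err := os.ReadFile(nodenameFile)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("CNI can proceed; node name: %s\n", name)
    }
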
Aug 13 00:11:42.568261 containerd[1430]: time="2025-08-13T00:11:42.568016005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nqv9j,Uid:e81ec000-f2d6-44b6-854d-59a730f62e7e,Namespace:calico-system,Attempt:0,}" Aug 13 00:11:42.680881 containerd[1430]: time="2025-08-13T00:11:42.680824348Z" level=error msg="Failed to destroy network for sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:42.682174 containerd[1430]: time="2025-08-13T00:11:42.682065798Z" level=error msg="encountered an error cleaning up failed sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:42.682174 containerd[1430]: time="2025-08-13T00:11:42.682153613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nqv9j,Uid:e81ec000-f2d6-44b6-854d-59a730f62e7e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:42.682481 kubelet[2484]: E0813 00:11:42.682401 2484 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:42.682481 kubelet[2484]: E0813 00:11:42.682463 2484 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nqv9j" Aug 13 00:11:42.684764 kubelet[2484]: E0813 00:11:42.682482 2484 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nqv9j" Aug 13 00:11:42.684764 kubelet[2484]: E0813 00:11:42.682532 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nqv9j_calico-system(e81ec000-f2d6-44b6-854d-59a730f62e7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nqv9j_calico-system(e81ec000-f2d6-44b6-854d-59a730f62e7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nqv9j" podUID="e81ec000-f2d6-44b6-854d-59a730f62e7e" Aug 13 00:11:42.684067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd-shm.mount: Deactivated successfully. Aug 13 00:11:42.699854 kubelet[2484]: I0813 00:11:42.699823 2484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:11:42.700551 containerd[1430]: time="2025-08-13T00:11:42.700465914Z" level=info msg="StopPodSandbox for \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\"" Aug 13 00:11:42.700684 containerd[1430]: time="2025-08-13T00:11:42.700661827Z" level=info msg="Ensure that sandbox 1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd in task-service has been cleanup successfully" Aug 13 00:11:42.744477 containerd[1430]: time="2025-08-13T00:11:42.744401314Z" level=error msg="StopPodSandbox for \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\" failed" error="failed to destroy network for sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:11:42.744693 kubelet[2484]: E0813 00:11:42.744654 2484 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:11:42.744746 kubelet[2484]: E0813 00:11:42.744714 2484 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd"} Aug 13 00:11:42.744775 kubelet[2484]: E0813 00:11:42.744752 2484 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e81ec000-f2d6-44b6-854d-59a730f62e7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:11:42.744836 kubelet[2484]: E0813 00:11:42.744774 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e81ec000-f2d6-44b6-854d-59a730f62e7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-nqv9j" podUID="e81ec000-f2d6-44b6-854d-59a730f62e7e" Aug 13 00:11:44.955841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985686919.mount: Deactivated successfully. Aug 13 00:11:45.221051 containerd[1430]: time="2025-08-13T00:11:45.220918524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:45.222111 containerd[1430]: time="2025-08-13T00:11:45.221902314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 13 00:11:45.223477 containerd[1430]: time="2025-08-13T00:11:45.223411945Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:45.225647 containerd[1430]: time="2025-08-13T00:11:45.225603960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:45.227139 containerd[1430]: time="2025-08-13T00:11:45.226635638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.54402263s" Aug 13 00:11:45.227139 containerd[1430]: time="2025-08-13T00:11:45.226678284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:11:45.241462 containerd[1430]: time="2025-08-13T00:11:45.241258714Z" level=info msg="CreateContainer within sandbox \"d7c75f9356477f350034bbcce50379a2fe823d075077d08a3ca7c85bfc561434\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:11:45.298513 containerd[1430]: time="2025-08-13T00:11:45.298451138Z" level=info msg="CreateContainer within sandbox \"d7c75f9356477f350034bbcce50379a2fe823d075077d08a3ca7c85bfc561434\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ade51fe551373fa981f75b03189f4e0e2f65646a42b0dcc5faa59704ec09062b\"" Aug 13 00:11:45.299108 containerd[1430]: time="2025-08-13T00:11:45.299081274Z" level=info msg="StartContainer for \"ade51fe551373fa981f75b03189f4e0e2f65646a42b0dcc5faa59704ec09062b\"" Aug 13 00:11:45.353564 systemd[1]: Started cri-containerd-ade51fe551373fa981f75b03189f4e0e2f65646a42b0dcc5faa59704ec09062b.scope - libcontainer container ade51fe551373fa981f75b03189f4e0e2f65646a42b0dcc5faa59704ec09062b. Aug 13 00:11:45.394488 containerd[1430]: time="2025-08-13T00:11:45.394443695Z" level=info msg="StartContainer for \"ade51fe551373fa981f75b03189f4e0e2f65646a42b0dcc5faa59704ec09062b\" returns successfully" Aug 13 00:11:45.620859 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:11:45.620982 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 00:11:45.741302 kubelet[2484]: I0813 00:11:45.741215 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hd7fs" podStartSLOduration=1.25647485 podStartE2EDuration="13.741198432s" podCreationTimestamp="2025-08-13 00:11:32 +0000 UTC" firstStartedPulling="2025-08-13 00:11:32.742629526 +0000 UTC m=+20.281394536" lastFinishedPulling="2025-08-13 00:11:45.227353068 +0000 UTC m=+32.766118118" observedRunningTime="2025-08-13 00:11:45.736550081 +0000 UTC m=+33.275315171" watchObservedRunningTime="2025-08-13 00:11:45.741198432 +0000 UTC m=+33.279963482" Aug 13 00:11:45.743588 containerd[1430]: time="2025-08-13T00:11:45.743242104Z" level=info msg="StopPodSandbox for \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\"" Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:45.923 [INFO][3827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:45.924 [INFO][3827] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" iface="eth0" netns="/var/run/netns/cni-e2d52a62-657e-4f74-2a7f-af517c211275" Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:45.924 [INFO][3827] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" iface="eth0" netns="/var/run/netns/cni-e2d52a62-657e-4f74-2a7f-af517c211275" Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:45.925 [INFO][3827] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" iface="eth0" netns="/var/run/netns/cni-e2d52a62-657e-4f74-2a7f-af517c211275" Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:45.925 [INFO][3827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:45.925 [INFO][3827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:46.092 [INFO][3838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" HandleID="k8s-pod-network.5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Workload="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:46.092 [INFO][3838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:46.093 [INFO][3838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:46.107 [WARNING][3838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" HandleID="k8s-pod-network.5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Workload="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:46.107 [INFO][3838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" HandleID="k8s-pod-network.5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Workload="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:46.110 [INFO][3838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:11:46.114483 containerd[1430]: 2025-08-13 00:11:46.112 [INFO][3827] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:11:46.117040 systemd[1]: run-netns-cni\x2de2d52a62\x2d657e\x2d4f74\x2d2a7f\x2daf517c211275.mount: Deactivated successfully. Aug 13 00:11:46.117718 containerd[1430]: time="2025-08-13T00:11:46.117544849Z" level=info msg="TearDown network for sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\" successfully" Aug 13 00:11:46.117718 containerd[1430]: time="2025-08-13T00:11:46.117582095Z" level=info msg="StopPodSandbox for \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\" returns successfully" Aug 13 00:11:46.295635 kubelet[2484]: I0813 00:11:46.295590 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-whisker-backend-key-pair\") pod \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\" (UID: \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\") " Aug 13 00:11:46.295635 kubelet[2484]: I0813 00:11:46.295642 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-whisker-ca-bundle\") pod \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\" (UID: \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\") " Aug 13 00:11:46.295812 kubelet[2484]: I0813 00:11:46.295667 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlgwv\" (UniqueName: \"kubernetes.io/projected/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-kube-api-access-nlgwv\") pod \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\" (UID: \"975cc5f0-0666-4e02-aeb2-c4aaa10bc520\") " Aug 13 00:11:46.309850 systemd[1]: var-lib-kubelet-pods-975cc5f0\x2d0666\x2d4e02\x2daeb2\x2dc4aaa10bc520-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnlgwv.mount: Deactivated successfully. Aug 13 00:11:46.310107 systemd[1]: var-lib-kubelet-pods-975cc5f0\x2d0666\x2d4e02\x2daeb2\x2dc4aaa10bc520-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:11:46.310445 kubelet[2484]: I0813 00:11:46.310209 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-kube-api-access-nlgwv" (OuterVolumeSpecName: "kube-api-access-nlgwv") pod "975cc5f0-0666-4e02-aeb2-c4aaa10bc520" (UID: "975cc5f0-0666-4e02-aeb2-c4aaa10bc520"). InnerVolumeSpecName "kube-api-access-nlgwv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:11:46.314773 kubelet[2484]: I0813 00:11:46.311234 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "975cc5f0-0666-4e02-aeb2-c4aaa10bc520" (UID: "975cc5f0-0666-4e02-aeb2-c4aaa10bc520"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:11:46.314773 kubelet[2484]: I0813 00:11:46.311532 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "975cc5f0-0666-4e02-aeb2-c4aaa10bc520" (UID: "975cc5f0-0666-4e02-aeb2-c4aaa10bc520"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:11:46.396079 kubelet[2484]: I0813 00:11:46.395952 2484 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 00:11:46.396079 kubelet[2484]: I0813 00:11:46.395986 2484 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 00:11:46.396079 kubelet[2484]: I0813 00:11:46.395995 2484 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nlgwv\" (UniqueName: \"kubernetes.io/projected/975cc5f0-0666-4e02-aeb2-c4aaa10bc520-kube-api-access-nlgwv\") on node \"localhost\" DevicePath \"\"" Aug 13 00:11:46.564065 systemd[1]: Removed slice kubepods-besteffort-pod975cc5f0_0666_4e02_aeb2_c4aaa10bc520.slice - libcontainer container kubepods-besteffort-pod975cc5f0_0666_4e02_aeb2_c4aaa10bc520.slice. Aug 13 00:11:46.723330 kubelet[2484]: I0813 00:11:46.723201 2484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:11:46.810691 systemd[1]: Created slice kubepods-besteffort-podacbb102c_804f_498e_8cca_04f9f2f35683.slice - libcontainer container kubepods-besteffort-podacbb102c_804f_498e_8cca_04f9f2f35683.slice. 
Aug 13 00:11:46.898633 kubelet[2484]: I0813 00:11:46.898564 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc6jx\" (UniqueName: \"kubernetes.io/projected/acbb102c-804f-498e-8cca-04f9f2f35683-kube-api-access-lc6jx\") pod \"whisker-59cff6f678-g84hs\" (UID: \"acbb102c-804f-498e-8cca-04f9f2f35683\") " pod="calico-system/whisker-59cff6f678-g84hs" Aug 13 00:11:46.898633 kubelet[2484]: I0813 00:11:46.898642 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acbb102c-804f-498e-8cca-04f9f2f35683-whisker-ca-bundle\") pod \"whisker-59cff6f678-g84hs\" (UID: \"acbb102c-804f-498e-8cca-04f9f2f35683\") " pod="calico-system/whisker-59cff6f678-g84hs" Aug 13 00:11:46.899105 kubelet[2484]: I0813 00:11:46.898702 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/acbb102c-804f-498e-8cca-04f9f2f35683-whisker-backend-key-pair\") pod \"whisker-59cff6f678-g84hs\" (UID: \"acbb102c-804f-498e-8cca-04f9f2f35683\") " pod="calico-system/whisker-59cff6f678-g84hs" Aug 13 00:11:47.114739 containerd[1430]: time="2025-08-13T00:11:47.114603182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cff6f678-g84hs,Uid:acbb102c-804f-498e-8cca-04f9f2f35683,Namespace:calico-system,Attempt:0,}" Aug 13 00:11:47.327888 systemd-networkd[1374]: cali2b25788d694: Link UP Aug 13 00:11:47.328094 systemd-networkd[1374]: cali2b25788d694: Gained carrier Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.177 [INFO][3866] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.209 [INFO][3866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59cff6f678--g84hs-eth0 whisker-59cff6f678- calico-system acbb102c-804f-498e-8cca-04f9f2f35683 974 0 2025-08-13 00:11:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59cff6f678 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59cff6f678-g84hs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2b25788d694 [] [] }} ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Namespace="calico-system" Pod="whisker-59cff6f678-g84hs" WorkloadEndpoint="localhost-k8s-whisker--59cff6f678--g84hs-" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.209 [INFO][3866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Namespace="calico-system" Pod="whisker-59cff6f678-g84hs" WorkloadEndpoint="localhost-k8s-whisker--59cff6f678--g84hs-eth0" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.247 [INFO][3964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" HandleID="k8s-pod-network.2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Workload="localhost-k8s-whisker--59cff6f678--g84hs-eth0" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.247 [INFO][3964] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" HandleID="k8s-pod-network.2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Workload="localhost-k8s-whisker--59cff6f678--g84hs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000198510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59cff6f678-g84hs", "timestamp":"2025-08-13 00:11:47.247012552 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.247 [INFO][3964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.247 [INFO][3964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.247 [INFO][3964] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.265 [INFO][3964] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" host="localhost" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.273 [INFO][3964] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.280 [INFO][3964] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.284 [INFO][3964] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.287 [INFO][3964] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.287 [INFO][3964] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" host="localhost" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.289 [INFO][3964] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19 Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.302 [INFO][3964] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" host="localhost" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.312 [INFO][3964] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" host="localhost" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.312 [INFO][3964] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" host="localhost" Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.312 [INFO][3964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:11:47.353065 containerd[1430]: 2025-08-13 00:11:47.313 [INFO][3964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" HandleID="k8s-pod-network.2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Workload="localhost-k8s-whisker--59cff6f678--g84hs-eth0" Aug 13 00:11:47.353704 containerd[1430]: 2025-08-13 00:11:47.315 [INFO][3866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Namespace="calico-system" Pod="whisker-59cff6f678-g84hs" WorkloadEndpoint="localhost-k8s-whisker--59cff6f678--g84hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59cff6f678--g84hs-eth0", GenerateName:"whisker-59cff6f678-", Namespace:"calico-system", SelfLink:"", UID:"acbb102c-804f-498e-8cca-04f9f2f35683", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cff6f678", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59cff6f678-g84hs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2b25788d694", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:47.353704 containerd[1430]: 2025-08-13 00:11:47.315 [INFO][3866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Namespace="calico-system" Pod="whisker-59cff6f678-g84hs" WorkloadEndpoint="localhost-k8s-whisker--59cff6f678--g84hs-eth0" Aug 13 00:11:47.353704 containerd[1430]: 2025-08-13 00:11:47.315 [INFO][3866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b25788d694 ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Namespace="calico-system" Pod="whisker-59cff6f678-g84hs" WorkloadEndpoint="localhost-k8s-whisker--59cff6f678--g84hs-eth0" Aug 13 00:11:47.353704 containerd[1430]: 2025-08-13 00:11:47.330 [INFO][3866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Namespace="calico-system" Pod="whisker-59cff6f678-g84hs" WorkloadEndpoint="localhost-k8s-whisker--59cff6f678--g84hs-eth0" Aug 13 00:11:47.353704 containerd[1430]: 2025-08-13 00:11:47.331 [INFO][3866] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Namespace="calico-system" Pod="whisker-59cff6f678-g84hs" WorkloadEndpoint="localhost-k8s-whisker--59cff6f678--g84hs-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59cff6f678--g84hs-eth0", GenerateName:"whisker-59cff6f678-", Namespace:"calico-system", SelfLink:"", UID:"acbb102c-804f-498e-8cca-04f9f2f35683", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cff6f678", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19", Pod:"whisker-59cff6f678-g84hs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2b25788d694", MAC:"8a:10:d1:a9:f7:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:47.353704 containerd[1430]: 2025-08-13 00:11:47.347 [INFO][3866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19" Namespace="calico-system" Pod="whisker-59cff6f678-g84hs" WorkloadEndpoint="localhost-k8s-whisker--59cff6f678--g84hs-eth0" Aug 13 00:11:47.415508 containerd[1430]: time="2025-08-13T00:11:47.381459415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:47.415508 containerd[1430]: time="2025-08-13T00:11:47.415168895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:47.415508 containerd[1430]: time="2025-08-13T00:11:47.415195699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:47.415508 containerd[1430]: time="2025-08-13T00:11:47.415331118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:47.444710 systemd[1]: Started cri-containerd-2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19.scope - libcontainer container 2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19.
Aug 13 00:11:47.473545 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:11:47.493398 kernel: bpftool[4053]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:11:47.502181 containerd[1430]: time="2025-08-13T00:11:47.502115938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cff6f678-g84hs,Uid:acbb102c-804f-498e-8cca-04f9f2f35683,Namespace:calico-system,Attempt:0,} returns sandbox id \"2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19\"" Aug 13 00:11:47.504693 containerd[1430]: time="2025-08-13T00:11:47.504657623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:11:47.675082 systemd-networkd[1374]: vxlan.calico: Link UP Aug 13 00:11:47.675092 systemd-networkd[1374]: vxlan.calico: Gained carrier Aug 13 00:11:48.556921 kubelet[2484]: I0813 00:11:48.556718 2484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="975cc5f0-0666-4e02-aeb2-c4aaa10bc520" path="/var/lib/kubelet/pods/975cc5f0-0666-4e02-aeb2-c4aaa10bc520/volumes" Aug 13 00:11:48.784716 systemd-networkd[1374]: cali2b25788d694: Gained IPv6LL Aug 13 00:11:48.850266 containerd[1430]: time="2025-08-13T00:11:48.849428670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:48.850266 containerd[1430]: time="2025-08-13T00:11:48.850224140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:11:48.851443 containerd[1430]: time="2025-08-13T00:11:48.851406865Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:48.854999 containerd[1430]: time="2025-08-13T00:11:48.854951119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.35025293s" Aug 13 00:11:48.854999 containerd[1430]: time="2025-08-13T00:11:48.854996205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:11:48.858937 containerd[1430]: time="2025-08-13T00:11:48.858732326Z" level=info msg="CreateContainer within sandbox \"2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:11:48.873113 containerd[1430]: time="2025-08-13T00:11:48.873045720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:48.879877 containerd[1430]: time="2025-08-13T00:11:48.879829426Z" level=info msg="CreateContainer within sandbox \"2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6d721b9f5f3d4109dea655dbe527a759e80f66876eeb86e7792a8ea41758d4fc\"" Aug 13 00:11:48.880685 containerd[1430]: time="2025-08-13T00:11:48.880472515Z" level=info msg="StartContainer for 
\"6d721b9f5f3d4109dea655dbe527a759e80f66876eeb86e7792a8ea41758d4fc\"" Aug 13 00:11:48.912567 systemd[1]: Started cri-containerd-6d721b9f5f3d4109dea655dbe527a759e80f66876eeb86e7792a8ea41758d4fc.scope - libcontainer container 6d721b9f5f3d4109dea655dbe527a759e80f66876eeb86e7792a8ea41758d4fc. Aug 13 00:11:48.961896 containerd[1430]: time="2025-08-13T00:11:48.961846934Z" level=info msg="StartContainer for \"6d721b9f5f3d4109dea655dbe527a759e80f66876eeb86e7792a8ea41758d4fc\" returns successfully" Aug 13 00:11:48.965660 containerd[1430]: time="2025-08-13T00:11:48.965064103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:11:49.487221 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Aug 13 00:11:50.592165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935828945.mount: Deactivated successfully. Aug 13 00:11:50.611701 containerd[1430]: time="2025-08-13T00:11:50.611646455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:50.613387 containerd[1430]: time="2025-08-13T00:11:50.613336118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:11:50.614453 containerd[1430]: time="2025-08-13T00:11:50.614423341Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:50.616689 containerd[1430]: time="2025-08-13T00:11:50.616653234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:50.618378 containerd[1430]: time="2025-08-13T00:11:50.617938364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.652833535s" Aug 13 00:11:50.618378 containerd[1430]: time="2025-08-13T00:11:50.617976969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:11:50.621992 containerd[1430]: time="2025-08-13T00:11:50.621960053Z" level=info msg="CreateContainer within sandbox \"2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:11:50.658051 containerd[1430]: time="2025-08-13T00:11:50.657992637Z" level=info msg="CreateContainer within sandbox \"2692fd6868c03802c8c42e269a1dd3e52786e8560694fffc87cc7cb3f840fc19\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9d6e619640c7d8a6fd239970345c5491736973e54e4f9e82fdce12c1e5b3c125\"" Aug 13 00:11:50.658776 containerd[1430]: time="2025-08-13T00:11:50.658709012Z" level=info msg="StartContainer for \"9d6e619640c7d8a6fd239970345c5491736973e54e4f9e82fdce12c1e5b3c125\"" Aug 13 00:11:50.703635 systemd[1]: Started cri-containerd-9d6e619640c7d8a6fd239970345c5491736973e54e4f9e82fdce12c1e5b3c125.scope - libcontainer container 
9d6e619640c7d8a6fd239970345c5491736973e54e4f9e82fdce12c1e5b3c125. Aug 13 00:11:50.748536 containerd[1430]: time="2025-08-13T00:11:50.748487912Z" level=info msg="StartContainer for \"9d6e619640c7d8a6fd239970345c5491736973e54e4f9e82fdce12c1e5b3c125\" returns successfully" Aug 13 00:11:51.747747 kubelet[2484]: I0813 00:11:51.747503 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59cff6f678-g84hs" podStartSLOduration=2.632661293 podStartE2EDuration="5.747488404s" podCreationTimestamp="2025-08-13 00:11:46 +0000 UTC" firstStartedPulling="2025-08-13 00:11:47.504073539 +0000 UTC m=+35.042838589" lastFinishedPulling="2025-08-13 00:11:50.61890065 +0000 UTC m=+38.157665700" observedRunningTime="2025-08-13 00:11:51.747056149 +0000 UTC m=+39.285821199" watchObservedRunningTime="2025-08-13 00:11:51.747488404 +0000 UTC m=+39.286253454" Aug 13 00:11:52.554677 containerd[1430]: time="2025-08-13T00:11:52.554608412Z" level=info msg="StopPodSandbox for \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\"" Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.621 [INFO][4240] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.622 [INFO][4240] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" iface="eth0" netns="/var/run/netns/cni-61f45a2c-c08f-168a-3e36-9143692f61ee" Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.622 [INFO][4240] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" iface="eth0" netns="/var/run/netns/cni-61f45a2c-c08f-168a-3e36-9143692f61ee" Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.622 [INFO][4240] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" iface="eth0" netns="/var/run/netns/cni-61f45a2c-c08f-168a-3e36-9143692f61ee" Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.622 [INFO][4240] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.622 [INFO][4240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.645 [INFO][4249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" HandleID="k8s-pod-network.d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.645 [INFO][4249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.645 [INFO][4249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.659 [WARNING][4249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" HandleID="k8s-pod-network.d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.659 [INFO][4249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" HandleID="k8s-pod-network.d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.670 [INFO][4249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:11:52.674413 containerd[1430]: 2025-08-13 00:11:52.672 [INFO][4240] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:11:52.675293 containerd[1430]: time="2025-08-13T00:11:52.675151230Z" level=info msg="TearDown network for sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\" successfully" Aug 13 00:11:52.675293 containerd[1430]: time="2025-08-13T00:11:52.675183154Z" level=info msg="StopPodSandbox for \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\" returns successfully" Aug 13 00:11:52.677639 systemd[1]: run-netns-cni\x2d61f45a2c\x2dc08f\x2d168a\x2d3e36\x2d9143692f61ee.mount: Deactivated successfully. Aug 13 00:11:52.678498 containerd[1430]: time="2025-08-13T00:11:52.678454482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54446d6f8c-zdlwk,Uid:94cfa85d-0b82-444c-ba96-be8ce7895a84,Namespace:calico-system,Attempt:1,}" Aug 13 00:11:52.872852 systemd-networkd[1374]: cali15162155475: Link UP Aug 13 00:11:52.873152 systemd-networkd[1374]: cali15162155475: Gained carrier Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.801 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0 calico-kube-controllers-54446d6f8c- calico-system 94cfa85d-0b82-444c-ba96-be8ce7895a84 1003 0 2025-08-13 00:11:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54446d6f8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54446d6f8c-zdlwk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali15162155475 [] [] }} ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" Namespace="calico-system" Pod="calico-kube-controllers-54446d6f8c-zdlwk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.801 [INFO][4261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" Namespace="calico-system" Pod="calico-kube-controllers-54446d6f8c-zdlwk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.833 [INFO][4275] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" HandleID="k8s-pod-network.73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.834 [INFO][4275] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" HandleID="k8s-pod-network.73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54446d6f8c-zdlwk", "timestamp":"2025-08-13 00:11:52.833916461 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.835 [INFO][4275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.835 [INFO][4275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.835 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.844 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" host="localhost" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.849 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.854 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.856 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.858 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.858 [INFO][4275] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" host="localhost" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.859 [INFO][4275] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.864 [INFO][4275] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" host="localhost" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.869 [INFO][4275] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" host="localhost" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.869 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.130/26] handle="k8s-pod-network.73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" host="localhost" Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.869 [INFO][4275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:11:52.891534 containerd[1430]: 2025-08-13 00:11:52.869 [INFO][4275] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" HandleID="k8s-pod-network.73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:11:52.892204 containerd[1430]: 2025-08-13 00:11:52.871 [INFO][4261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" Namespace="calico-system" Pod="calico-kube-controllers-54446d6f8c-zdlwk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0", GenerateName:"calico-kube-controllers-54446d6f8c-", Namespace:"calico-system", SelfLink:"", UID:"94cfa85d-0b82-444c-ba96-be8ce7895a84", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54446d6f8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54446d6f8c-zdlwk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali15162155475", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:52.892204 containerd[1430]: 2025-08-13 00:11:52.871 [INFO][4261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" Namespace="calico-system" Pod="calico-kube-controllers-54446d6f8c-zdlwk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:11:52.892204 containerd[1430]: 2025-08-13 00:11:52.871 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15162155475 ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" Namespace="calico-system" Pod="calico-kube-controllers-54446d6f8c-zdlwk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:11:52.892204 containerd[1430]: 2025-08-13 00:11:52.873 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e"
Namespace="calico-system" Pod="calico-kube-controllers-54446d6f8c-zdlwk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:11:52.892204 containerd[1430]: 2025-08-13 00:11:52.874 [INFO][4261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" Namespace="calico-system" Pod="calico-kube-controllers-54446d6f8c-zdlwk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0", GenerateName:"calico-kube-controllers-54446d6f8c-", Namespace:"calico-system", SelfLink:"", UID:"94cfa85d-0b82-444c-ba96-be8ce7895a84", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54446d6f8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e", Pod:"calico-kube-controllers-54446d6f8c-zdlwk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali15162155475", MAC:"36:30:4d:e5:d2:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:52.892204 containerd[1430]: 2025-08-13 00:11:52.888 [INFO][4261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e" Namespace="calico-system" Pod="calico-kube-controllers-54446d6f8c-zdlwk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:11:52.912723 containerd[1430]: time="2025-08-13T00:11:52.909494982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:52.912723 containerd[1430]: time="2025-08-13T00:11:52.909594434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:52.912723 containerd[1430]: time="2025-08-13T00:11:52.909629039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:52.913820 containerd[1430]: time="2025-08-13T00:11:52.913701227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:52.932550 systemd[1]: Started cri-containerd-73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e.scope - libcontainer container 73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e. Aug 13 00:11:52.945215 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:11:52.969878 containerd[1430]: time="2025-08-13T00:11:52.969822158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54446d6f8c-zdlwk,Uid:94cfa85d-0b82-444c-ba96-be8ce7895a84,Namespace:calico-system,Attempt:1,} returns sandbox id \"73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e\"" Aug 13 00:11:52.971308 containerd[1430]: time="2025-08-13T00:11:52.971281580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:11:53.554672 containerd[1430]: time="2025-08-13T00:11:53.554436607Z" level=info msg="StopPodSandbox for \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\"" Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.607 [INFO][4352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.607 [INFO][4352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" iface="eth0" netns="/var/run/netns/cni-555f7d77-fce9-f6d0-eeb3-1ccbb53e7f81" Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.607 [INFO][4352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" iface="eth0" netns="/var/run/netns/cni-555f7d77-fce9-f6d0-eeb3-1ccbb53e7f81" Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.608 [INFO][4352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" iface="eth0" netns="/var/run/netns/cni-555f7d77-fce9-f6d0-eeb3-1ccbb53e7f81" Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.608 [INFO][4352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.608 [INFO][4352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.629 [INFO][4362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" HandleID="k8s-pod-network.98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.629 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.629 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.639 [WARNING][4362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" HandleID="k8s-pod-network.98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.640 [INFO][4362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" HandleID="k8s-pod-network.98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.642 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:11:53.646143 containerd[1430]: 2025-08-13 00:11:53.644 [INFO][4352] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:11:53.646919 containerd[1430]: time="2025-08-13T00:11:53.646300520Z" level=info msg="TearDown network for sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\" successfully" Aug 13 00:11:53.646919 containerd[1430]: time="2025-08-13T00:11:53.646328484Z" level=info msg="StopPodSandbox for \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\" returns successfully" Aug 13 00:11:53.646970 kubelet[2484]: E0813 00:11:53.646673 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:53.647987 containerd[1430]: time="2025-08-13T00:11:53.647515548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f2bhn,Uid:9b6c872d-33f8-4452-b725-41047a59fd6c,Namespace:kube-system,Attempt:1,}" Aug 13 00:11:53.679801 systemd[1]: run-netns-cni\x2d555f7d77\x2dfce9\x2df6d0\x2deeb3\x2d1ccbb53e7f81.mount: Deactivated successfully. 
Aug 13 00:11:53.811859 systemd-networkd[1374]: cali1ac3fcf92d9: Link UP Aug 13 00:11:53.812825 systemd-networkd[1374]: cali1ac3fcf92d9: Gained carrier Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.707 [INFO][4371] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0 coredns-674b8bbfcf- kube-system 9b6c872d-33f8-4452-b725-41047a59fd6c 1011 0 2025-08-13 00:11:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-f2bhn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ac3fcf92d9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2bhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f2bhn-" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.707 [INFO][4371] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2bhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.740 [INFO][4385] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" HandleID="k8s-pod-network.9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.740 [INFO][4385] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" HandleID="k8s-pod-network.9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-f2bhn", "timestamp":"2025-08-13 00:11:53.740389665 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.740 [INFO][4385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.740 [INFO][4385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.740 [INFO][4385] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.753 [INFO][4385] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" host="localhost" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.763 [INFO][4385] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.771 [INFO][4385] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.779 [INFO][4385] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.784 [INFO][4385] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.784 [INFO][4385] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" host="localhost" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.787 [INFO][4385] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05 Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.794 [INFO][4385] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" host="localhost" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.804 [INFO][4385] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" host="localhost" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.804 [INFO][4385] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" host="localhost" Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.804 [INFO][4385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:11:53.832026 containerd[1430]: 2025-08-13 00:11:53.804 [INFO][4385] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" HandleID="k8s-pod-network.9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:11:53.832672 containerd[1430]: 2025-08-13 00:11:53.809 [INFO][4371] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2bhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b6c872d-33f8-4452-b725-41047a59fd6c", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-f2bhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ac3fcf92d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:53.832672 containerd[1430]: 2025-08-13 00:11:53.809 [INFO][4371] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2bhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:11:53.832672 containerd[1430]: 2025-08-13 00:11:53.809 [INFO][4371] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ac3fcf92d9 ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2bhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:11:53.832672 containerd[1430]: 2025-08-13 00:11:53.813 [INFO][4371] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2bhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:11:53.832672 
containerd[1430]: 2025-08-13 00:11:53.814 [INFO][4371] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2bhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b6c872d-33f8-4452-b725-41047a59fd6c", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05", Pod:"coredns-674b8bbfcf-f2bhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ac3fcf92d9", MAC:"2e:12:2f:c0:81:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:53.832672 containerd[1430]: 2025-08-13 00:11:53.826 [INFO][4371] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2bhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:11:53.898435 containerd[1430]: time="2025-08-13T00:11:53.896868732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:53.898435 containerd[1430]: time="2025-08-13T00:11:53.898108923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:53.898435 containerd[1430]: time="2025-08-13T00:11:53.898126485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:53.898435 containerd[1430]: time="2025-08-13T00:11:53.898295826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:53.917632 systemd[1]: Started cri-containerd-9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05.scope - libcontainer container 9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05. Aug 13 00:11:53.930025 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:11:53.958111 containerd[1430]: time="2025-08-13T00:11:53.958064708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f2bhn,Uid:9b6c872d-33f8-4452-b725-41047a59fd6c,Namespace:kube-system,Attempt:1,} returns sandbox id \"9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05\"" Aug 13 00:11:53.960183 kubelet[2484]: E0813 00:11:53.959501 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:53.964595 containerd[1430]: time="2025-08-13T00:11:53.964550539Z" level=info msg="CreateContainer within sandbox \"9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:11:53.980152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425680321.mount: Deactivated successfully. Aug 13 00:11:53.985383 containerd[1430]: time="2025-08-13T00:11:53.985234339Z" level=info msg="CreateContainer within sandbox \"9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a1ec3d9dbd0f8e7b4a6a3e638abc2651e54cd69e5fefc75379a4dbeb3ef4da70\"" Aug 13 00:11:53.986034 containerd[1430]: time="2025-08-13T00:11:53.986009394Z" level=info msg="StartContainer for \"a1ec3d9dbd0f8e7b4a6a3e638abc2651e54cd69e5fefc75379a4dbeb3ef4da70\"" Aug 13 00:11:54.015572 systemd[1]: Started cri-containerd-a1ec3d9dbd0f8e7b4a6a3e638abc2651e54cd69e5fefc75379a4dbeb3ef4da70.scope - libcontainer container a1ec3d9dbd0f8e7b4a6a3e638abc2651e54cd69e5fefc75379a4dbeb3ef4da70. Aug 13 00:11:54.085226 containerd[1430]: time="2025-08-13T00:11:54.085045898Z" level=info msg="StartContainer for \"a1ec3d9dbd0f8e7b4a6a3e638abc2651e54cd69e5fefc75379a4dbeb3ef4da70\" returns successfully" Aug 13 00:11:54.202469 kubelet[2484]: I0813 00:11:54.202421 2484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:11:54.286583 systemd-networkd[1374]: cali15162155475: Gained IPv6LL Aug 13 00:11:54.391095 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:44686.service - OpenSSH per-connection server daemon (10.0.0.1:44686). Aug 13 00:11:54.470282 sshd[4510]: Accepted publickey for core from 10.0.0.1 port 44686 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:11:54.474336 sshd[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:11:54.482143 systemd-logind[1418]: New session 8 of user core. Aug 13 00:11:54.491042 systemd[1]: Started session-8.scope - Session 8 of User core. 
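Annotation: the kubelet "Nameserver limits exceeded" warning above fires because glibc resolvers only honour the first three nameserver entries in resolv.conf, so kubelet trims the list and logs the applied line. A short illustrative sketch of that trimming; the function name is invented and the real logic lives in kubelet's dns.go:

```go
// Sketch of the nameserver trimming the kubelet warning reports.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc honours at most three nameserver lines

func applyNameserverLimit(servers []string) []string {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers] // later entries are silently omitted
	}
	return servers
}

func main() {
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // example input
	fmt.Println(strings.Join(applyNameserverLimit(configured), " "))
	// "1.1.1.1 1.0.0.1 8.8.8.8", the applied line from the kubelet message
}
```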
Aug 13 00:11:54.557306 containerd[1430]: time="2025-08-13T00:11:54.556617485Z" level=info msg="StopPodSandbox for \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\"" Aug 13 00:11:54.557306 containerd[1430]: time="2025-08-13T00:11:54.556656489Z" level=info msg="StopPodSandbox for \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\"" Aug 13 00:11:54.561060 containerd[1430]: time="2025-08-13T00:11:54.561027210Z" level=info msg="StopPodSandbox for \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\"" Aug 13 00:11:54.562636 containerd[1430]: time="2025-08-13T00:11:54.561168746Z" level=info msg="StopPodSandbox for \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\"" Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.652 [INFO][4592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.653 [INFO][4592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" iface="eth0" netns="/var/run/netns/cni-954e3379-45a6-9ab0-d9f0-539acbc8469f" Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.653 [INFO][4592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" iface="eth0" netns="/var/run/netns/cni-954e3379-45a6-9ab0-d9f0-539acbc8469f" Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.654 [INFO][4592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" iface="eth0" netns="/var/run/netns/cni-954e3379-45a6-9ab0-d9f0-539acbc8469f" Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.654 [INFO][4592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.654 [INFO][4592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.697 [INFO][4622] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" HandleID="k8s-pod-network.1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.697 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.697 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.715 [WARNING][4622] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" HandleID="k8s-pod-network.1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.717 [INFO][4622] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" HandleID="k8s-pod-network.1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.724 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:11:54.732199 containerd[1430]: 2025-08-13 00:11:54.727 [INFO][4592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:11:54.734021 containerd[1430]: time="2025-08-13T00:11:54.733796645Z" level=info msg="TearDown network for sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\" successfully" Aug 13 00:11:54.734021 containerd[1430]: time="2025-08-13T00:11:54.733867214Z" level=info msg="StopPodSandbox for \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\" returns successfully" Aug 13 00:11:54.738551 systemd[1]: run-netns-cni\x2d954e3379\x2d45a6\x2d9ab0\x2dd9f0\x2d539acbc8469f.mount: Deactivated successfully. Aug 13 00:11:54.738900 containerd[1430]: time="2025-08-13T00:11:54.738620059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nqv9j,Uid:e81ec000-f2d6-44b6-854d-59a730f62e7e,Namespace:calico-system,Attempt:1,}" Aug 13 00:11:54.757157 kubelet[2484]: E0813 00:11:54.756415 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:54.775054 kubelet[2484]: I0813 00:11:54.774706 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f2bhn" podStartSLOduration=35.774688991 podStartE2EDuration="35.774688991s" podCreationTimestamp="2025-08-13 00:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:11:54.774670428 +0000 UTC m=+42.313435478" watchObservedRunningTime="2025-08-13 00:11:54.774688991 +0000 UTC m=+42.313454001" Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.709 [INFO][4572] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.710 [INFO][4572] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" iface="eth0" netns="/var/run/netns/cni-4870dbb8-8305-b1b0-cb30-798d8ef494bb" Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.710 [INFO][4572] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" iface="eth0" netns="/var/run/netns/cni-4870dbb8-8305-b1b0-cb30-798d8ef494bb" Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.711 [INFO][4572] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" iface="eth0" netns="/var/run/netns/cni-4870dbb8-8305-b1b0-cb30-798d8ef494bb" Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.711 [INFO][4572] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.711 [INFO][4572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.811 [INFO][4636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" HandleID="k8s-pod-network.7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.811 [INFO][4636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.811 [INFO][4636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.837 [WARNING][4636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" HandleID="k8s-pod-network.7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.837 [INFO][4636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" HandleID="k8s-pod-network.7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.847 [INFO][4636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:11:54.854922 containerd[1430]: 2025-08-13 00:11:54.852 [INFO][4572] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:11:54.856631 containerd[1430]: time="2025-08-13T00:11:54.855621420Z" level=info msg="TearDown network for sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\" successfully" Aug 13 00:11:54.856631 containerd[1430]: time="2025-08-13T00:11:54.855665425Z" level=info msg="StopPodSandbox for \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\" returns successfully" Aug 13 00:11:54.858536 systemd[1]: run-netns-cni\x2d4870dbb8\x2d8305\x2db1b0\x2dcb30\x2d798d8ef494bb.mount: Deactivated successfully. Aug 13 00:11:54.858936 containerd[1430]: time="2025-08-13T00:11:54.858618456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bf9ff8c6-mp76l,Uid:559a4267-56cf-459b-a0e0-15a1cc2cb395,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.716 [INFO][4585] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.717 [INFO][4585] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" iface="eth0" netns="/var/run/netns/cni-68afb925-25f5-b615-6972-72d11a172257" Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.717 [INFO][4585] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" iface="eth0" netns="/var/run/netns/cni-68afb925-25f5-b615-6972-72d11a172257" Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.717 [INFO][4585] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" iface="eth0" netns="/var/run/netns/cni-68afb925-25f5-b615-6972-72d11a172257" Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.717 [INFO][4585] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.718 [INFO][4585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.845 [INFO][4639] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" HandleID="k8s-pod-network.ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.845 [INFO][4639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.847 [INFO][4639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.861 [WARNING][4639] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" HandleID="k8s-pod-network.ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.861 [INFO][4639] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" HandleID="k8s-pod-network.ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.870 [INFO][4639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:11:54.887672 containerd[1430]: 2025-08-13 00:11:54.879 [INFO][4585] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:11:54.888521 containerd[1430]: time="2025-08-13T00:11:54.888295387Z" level=info msg="TearDown network for sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\" successfully" Aug 13 00:11:54.889442 containerd[1430]: time="2025-08-13T00:11:54.888355595Z" level=info msg="StopPodSandbox for \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\" returns successfully" Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.717 [INFO][4604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.717 [INFO][4604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" iface="eth0" netns="/var/run/netns/cni-6e2c296d-9bd5-1f8a-fa2c-ba0914141f5a" Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.717 [INFO][4604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" iface="eth0" netns="/var/run/netns/cni-6e2c296d-9bd5-1f8a-fa2c-ba0914141f5a" Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.718 [INFO][4604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" iface="eth0" netns="/var/run/netns/cni-6e2c296d-9bd5-1f8a-fa2c-ba0914141f5a" Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.718 [INFO][4604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.718 [INFO][4604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.850 [INFO][4645] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" HandleID="k8s-pod-network.f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.850 [INFO][4645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.870 [INFO][4645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.888 [WARNING][4645] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" HandleID="k8s-pod-network.f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.888 [INFO][4645] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" HandleID="k8s-pod-network.f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.893 [INFO][4645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:11:54.906712 containerd[1430]: 2025-08-13 00:11:54.899 [INFO][4604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:11:54.908116 containerd[1430]: time="2025-08-13T00:11:54.907978569Z" level=info msg="TearDown network for sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\" successfully" Aug 13 00:11:54.908116 containerd[1430]: time="2025-08-13T00:11:54.908014374Z" level=info msg="StopPodSandbox for \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\" returns successfully" Aug 13 00:11:54.910426 containerd[1430]: time="2025-08-13T00:11:54.909044296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bf9ff8c6-ds8gn,Uid:73c192b0-5021-43fd-851e-5152f889105e,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:11:54.917181 containerd[1430]: time="2025-08-13T00:11:54.917133419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tg9t5,Uid:fbdf544e-e157-4095-9b30-e5d9130445c2,Namespace:calico-system,Attempt:1,}" Aug 13 00:11:54.929206 sshd[4510]: pam_unix(sshd:session): session closed for user core Aug 13 00:11:54.973228 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:44686.service: Deactivated successfully. Aug 13 00:11:54.977321 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:11:54.979817 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:11:54.981648 systemd-logind[1418]: Removed session 8. 
Aug 13 00:11:55.096489 systemd-networkd[1374]: cali59a113afdb4: Link UP Aug 13 00:11:55.096735 systemd-networkd[1374]: cali59a113afdb4: Gained carrier Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:54.896 [INFO][4656] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nqv9j-eth0 csi-node-driver- calico-system e81ec000-f2d6-44b6-854d-59a730f62e7e 1058 0 2025-08-13 00:11:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nqv9j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali59a113afdb4 [] [] }} ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Namespace="calico-system" Pod="csi-node-driver-nqv9j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqv9j-" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:54.896 [INFO][4656] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Namespace="calico-system" Pod="csi-node-driver-nqv9j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:54.992 [INFO][4693] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" HandleID="k8s-pod-network.cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:54.992 [INFO][4693] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" HandleID="k8s-pod-network.cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nqv9j", "timestamp":"2025-08-13 00:11:54.991641803 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:54.992 [INFO][4693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:54.992 [INFO][4693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:54.992 [INFO][4693] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.022 [INFO][4693] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" host="localhost" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.036 [INFO][4693] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.045 [INFO][4693] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.048 [INFO][4693] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.052 [INFO][4693] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.053 [INFO][4693] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" host="localhost" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.055 [INFO][4693] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9 Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.067 [INFO][4693] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" host="localhost" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.076 [INFO][4693] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" host="localhost" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.076 [INFO][4693] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" host="localhost" Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.076 [INFO][4693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:11:55.127299 containerd[1430]: 2025-08-13 00:11:55.076 [INFO][4693] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" HandleID="k8s-pod-network.cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:11:55.128244 containerd[1430]: 2025-08-13 00:11:55.086 [INFO][4656] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Namespace="calico-system" Pod="csi-node-driver-nqv9j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqv9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nqv9j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e81ec000-f2d6-44b6-854d-59a730f62e7e", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nqv9j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59a113afdb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:55.128244 containerd[1430]: 2025-08-13 00:11:55.087 [INFO][4656] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Namespace="calico-system" Pod="csi-node-driver-nqv9j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:11:55.128244 containerd[1430]: 2025-08-13 00:11:55.087 [INFO][4656] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59a113afdb4 ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Namespace="calico-system" Pod="csi-node-driver-nqv9j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:11:55.128244 containerd[1430]: 2025-08-13 00:11:55.094 [INFO][4656] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Namespace="calico-system" Pod="csi-node-driver-nqv9j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:11:55.128244 containerd[1430]: 2025-08-13 00:11:55.103 [INFO][4656] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Namespace="calico-system" Pod="csi-node-driver-nqv9j" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nqv9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nqv9j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e81ec000-f2d6-44b6-854d-59a730f62e7e", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9", Pod:"csi-node-driver-nqv9j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59a113afdb4", MAC:"ce:4e:df:8b:2c:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:55.128244 containerd[1430]: 2025-08-13 00:11:55.118 [INFO][4656] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9" Namespace="calico-system" Pod="csi-node-driver-nqv9j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:11:55.171089 containerd[1430]: time="2025-08-13T00:11:55.169845910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:55.171089 containerd[1430]: time="2025-08-13T00:11:55.169906277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:55.171089 containerd[1430]: time="2025-08-13T00:11:55.169917638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:55.171089 containerd[1430]: time="2025-08-13T00:11:55.170041372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:55.190938 systemd-networkd[1374]: cali0f75d7584ee: Link UP Aug 13 00:11:55.191155 systemd-networkd[1374]: cali0f75d7584ee: Gained carrier Aug 13 00:11:55.206595 systemd[1]: Started cri-containerd-cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9.scope - libcontainer container cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9. 
Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:54.959 [INFO][4682] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0 calico-apiserver-69bf9ff8c6- calico-apiserver 559a4267-56cf-459b-a0e0-15a1cc2cb395 1059 0 2025-08-13 00:11:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69bf9ff8c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69bf9ff8c6-mp76l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f75d7584ee [] [] }} ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-mp76l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:54.959 [INFO][4682] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-mp76l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.014 [INFO][4703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" HandleID="k8s-pod-network.878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.015 [INFO][4703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" HandleID="k8s-pod-network.878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69bf9ff8c6-mp76l", "timestamp":"2025-08-13 00:11:55.014745033 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.015 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.076 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.076 [INFO][4703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.125 [INFO][4703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" host="localhost" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.140 [INFO][4703] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.149 [INFO][4703] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.153 [INFO][4703] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.159 [INFO][4703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.159 [INFO][4703] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" host="localhost" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.162 [INFO][4703] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390 Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.171 [INFO][4703] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" host="localhost" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.178 [INFO][4703] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" host="localhost" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.178 [INFO][4703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" host="localhost" Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.178 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:11:55.215247 containerd[1430]: 2025-08-13 00:11:55.178 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" HandleID="k8s-pod-network.878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:11:55.215854 containerd[1430]: 2025-08-13 00:11:55.186 [INFO][4682] cni-plugin/k8s.go 418: Populated endpoint ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-mp76l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0", GenerateName:"calico-apiserver-69bf9ff8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"559a4267-56cf-459b-a0e0-15a1cc2cb395", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bf9ff8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69bf9ff8c6-mp76l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f75d7584ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:55.215854 containerd[1430]: 2025-08-13 00:11:55.186 [INFO][4682] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-mp76l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:11:55.215854 containerd[1430]: 2025-08-13 00:11:55.186 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f75d7584ee ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-mp76l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:11:55.215854 containerd[1430]: 2025-08-13 00:11:55.192 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-mp76l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:11:55.215854 containerd[1430]: 2025-08-13 00:11:55.193 [INFO][4682] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-mp76l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0", GenerateName:"calico-apiserver-69bf9ff8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"559a4267-56cf-459b-a0e0-15a1cc2cb395", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bf9ff8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390", Pod:"calico-apiserver-69bf9ff8c6-mp76l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f75d7584ee", MAC:"72:a2:85:ea:60:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:55.215854 containerd[1430]: 2025-08-13 00:11:55.208 [INFO][4682] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-mp76l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:11:55.241811 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:11:55.245696 containerd[1430]: time="2025-08-13T00:11:55.245243917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:55.245696 containerd[1430]: time="2025-08-13T00:11:55.245309645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:55.261867 containerd[1430]: time="2025-08-13T00:11:55.245376573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:55.261867 containerd[1430]: time="2025-08-13T00:11:55.261653106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:55.273642 containerd[1430]: time="2025-08-13T00:11:55.273601015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nqv9j,Uid:e81ec000-f2d6-44b6-854d-59a730f62e7e,Namespace:calico-system,Attempt:1,} returns sandbox id \"cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9\"" Aug 13 00:11:55.286339 systemd-networkd[1374]: califc5489654d2: Link UP Aug 13 00:11:55.289172 systemd-networkd[1374]: califc5489654d2: Gained carrier Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.069 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0 calico-apiserver-69bf9ff8c6- calico-apiserver 73c192b0-5021-43fd-851e-5152f889105e 1060 0 2025-08-13 00:11:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69bf9ff8c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69bf9ff8c6-ds8gn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc5489654d2 [] [] }} ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-ds8gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.071 [INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-ds8gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.115 [INFO][4744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" HandleID="k8s-pod-network.0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.116 [INFO][4744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" HandleID="k8s-pod-network.0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003436c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69bf9ff8c6-ds8gn", "timestamp":"2025-08-13 00:11:55.115772742 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.116 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.178 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.178 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.222 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" host="localhost" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.242 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.251 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.254 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.257 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.257 [INFO][4744] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" host="localhost" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.258 [INFO][4744] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.263 [INFO][4744] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" host="localhost" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.273 [INFO][4744] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" host="localhost" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.274 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" host="localhost" Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.274 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:11:55.310227 containerd[1430]: 2025-08-13 00:11:55.274 [INFO][4744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" HandleID="k8s-pod-network.0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:11:55.311135 containerd[1430]: 2025-08-13 00:11:55.280 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-ds8gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0", GenerateName:"calico-apiserver-69bf9ff8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"73c192b0-5021-43fd-851e-5152f889105e", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bf9ff8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69bf9ff8c6-ds8gn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc5489654d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:55.311135 containerd[1430]: 2025-08-13 00:11:55.280 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-ds8gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:11:55.311135 containerd[1430]: 2025-08-13 00:11:55.280 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc5489654d2 ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-ds8gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:11:55.311135 containerd[1430]: 2025-08-13 00:11:55.283 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-ds8gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0"
Aug 13 00:11:55.311135 containerd[1430]: 2025-08-13 00:11:55.284 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-ds8gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0", GenerateName:"calico-apiserver-69bf9ff8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"73c192b0-5021-43fd-851e-5152f889105e", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bf9ff8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f", Pod:"calico-apiserver-69bf9ff8c6-ds8gn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc5489654d2", MAC:"72:07:88:d7:67:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:55.311135 containerd[1430]: 2025-08-13 00:11:55.306 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f" Namespace="calico-apiserver" Pod="calico-apiserver-69bf9ff8c6-ds8gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:11:55.313957 systemd[1]: Started cri-containerd-878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390.scope - libcontainer container 878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390. Aug 13 00:11:55.332948 containerd[1430]: time="2025-08-13T00:11:55.332643401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:55.332948 containerd[1430]: time="2025-08-13T00:11:55.332731291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:55.332948 containerd[1430]: time="2025-08-13T00:11:55.332744253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:11:55.333242 containerd[1430]: time="2025-08-13T00:11:55.333150820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:55.354721 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:11:55.358786 systemd[1]: Started cri-containerd-0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f.scope - libcontainer container 0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f. Aug 13 00:11:55.380912 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:11:55.393718 containerd[1430]: time="2025-08-13T00:11:55.392353584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bf9ff8c6-mp76l,Uid:559a4267-56cf-459b-a0e0-15a1cc2cb395,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390\"" Aug 13 00:11:55.396623 systemd-networkd[1374]: cali14273bb6247: Link UP Aug 13 00:11:55.396821 systemd-networkd[1374]: cali14273bb6247: Gained carrier Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.087 [INFO][4724] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0 goldmane-768f4c5c69- calico-system fbdf544e-e157-4095-9b30-e5d9130445c2 1061 0 2025-08-13 00:11:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-tg9t5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali14273bb6247 [] [] }} ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Namespace="calico-system" Pod="goldmane-768f4c5c69-tg9t5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tg9t5-" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.087 [INFO][4724] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Namespace="calico-system" Pod="goldmane-768f4c5c69-tg9t5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.152 [INFO][4753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" HandleID="k8s-pod-network.073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.152 [INFO][4753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" HandleID="k8s-pod-network.073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-tg9t5", "timestamp":"2025-08-13 00:11:55.152557459 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.152 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.274 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.274 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.323 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" host="localhost" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.343 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.357 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.361 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.365 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.365 [INFO][4753] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" host="localhost" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.367 [INFO][4753] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6 Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.373 [INFO][4753] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" host="localhost" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.383 [INFO][4753] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" host="localhost" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.383 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" host="localhost" Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.383 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:11:55.412920 containerd[1430]: 2025-08-13 00:11:55.383 [INFO][4753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" HandleID="k8s-pod-network.073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:11:55.414194 containerd[1430]: 2025-08-13 00:11:55.393 [INFO][4724] cni-plugin/k8s.go 418: Populated endpoint ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Namespace="calico-system" Pod="goldmane-768f4c5c69-tg9t5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"fbdf544e-e157-4095-9b30-e5d9130445c2", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-tg9t5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14273bb6247", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:55.414194 containerd[1430]: 2025-08-13 00:11:55.393 [INFO][4724] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Namespace="calico-system" Pod="goldmane-768f4c5c69-tg9t5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:11:55.414194 containerd[1430]: 2025-08-13 00:11:55.393 [INFO][4724] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14273bb6247 ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Namespace="calico-system" Pod="goldmane-768f4c5c69-tg9t5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:11:55.414194 containerd[1430]: 2025-08-13 00:11:55.395 [INFO][4724] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Namespace="calico-system" Pod="goldmane-768f4c5c69-tg9t5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0"
Aug 13 00:11:55.414194 containerd[1430]: 2025-08-13 00:11:55.395 [INFO][4724] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Namespace="calico-system" Pod="goldmane-768f4c5c69-tg9t5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"fbdf544e-e157-4095-9b30-e5d9130445c2", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6", Pod:"goldmane-768f4c5c69-tg9t5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14273bb6247", MAC:"92:b1:31:2c:f5:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:55.414194 containerd[1430]: 2025-08-13 00:11:55.408 [INFO][4724] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6" Namespace="calico-system" Pod="goldmane-768f4c5c69-tg9t5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:11:55.432424 containerd[1430]: time="2025-08-13T00:11:55.432379559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bf9ff8c6-ds8gn,Uid:73c192b0-5021-43fd-851e-5152f889105e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f\"" Aug 13 00:11:55.442505 containerd[1430]: time="2025-08-13T00:11:55.442130933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:55.442505 containerd[1430]: time="2025-08-13T00:11:55.442296512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:55.442505 containerd[1430]: time="2025-08-13T00:11:55.442314674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:55.443409 containerd[1430]: time="2025-08-13T00:11:55.443256744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:55.466586 systemd[1]: Started cri-containerd-073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6.scope - libcontainer container 073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6.
Aug 13 00:11:55.484205 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:11:55.514844 containerd[1430]: time="2025-08-13T00:11:55.514803944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tg9t5,Uid:fbdf544e-e157-4095-9b30-e5d9130445c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6\"" Aug 13 00:11:55.555718 containerd[1430]: time="2025-08-13T00:11:55.555546922Z" level=info msg="StopPodSandbox for \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\"" Aug 13 00:11:55.630707 systemd-networkd[1374]: cali1ac3fcf92d9: Gained IPv6LL Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.612 [INFO][4977] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.612 [INFO][4977] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" iface="eth0" netns="/var/run/netns/cni-4ca168f3-d673-6a4b-a97e-aad82d9e338f" Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.613 [INFO][4977] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" iface="eth0" netns="/var/run/netns/cni-4ca168f3-d673-6a4b-a97e-aad82d9e338f" Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.613 [INFO][4977] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" iface="eth0" netns="/var/run/netns/cni-4ca168f3-d673-6a4b-a97e-aad82d9e338f" Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.613 [INFO][4977] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.613 [INFO][4977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.648 [INFO][4986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" HandleID="k8s-pod-network.a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.648 [INFO][4986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.648 [INFO][4986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.657 [WARNING][4986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" HandleID="k8s-pod-network.a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.657 [INFO][4986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" HandleID="k8s-pod-network.a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.659 [INFO][4986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:11:55.664791 containerd[1430]: 2025-08-13 00:11:55.662 [INFO][4977] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:11:55.665312 containerd[1430]: time="2025-08-13T00:11:55.664824389Z" level=info msg="TearDown network for sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\" successfully" Aug 13 00:11:55.665312 containerd[1430]: time="2025-08-13T00:11:55.664852232Z" level=info msg="StopPodSandbox for \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\" returns successfully" Aug 13 00:11:55.665518 kubelet[2484]: E0813 00:11:55.665491 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:55.666646 containerd[1430]: time="2025-08-13T00:11:55.666593755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lh4nv,Uid:c774b47c-e08c-42ad-b562-dd791cc0ed35,Namespace:kube-system,Attempt:1,}" Aug 13 00:11:55.685610 systemd[1]: run-netns-cni\x2d4ca168f3\x2dd673\x2d6a4b\x2da97e\x2daad82d9e338f.mount: Deactivated successfully. Aug 13 00:11:55.685896 systemd[1]: run-netns-cni\x2d68afb925\x2d25f5\x2db615\x2d6972\x2d72d11a172257.mount: Deactivated successfully. Aug 13 00:11:55.685964 systemd[1]: run-netns-cni\x2d6e2c296d\x2d9bd5\x2d1f8a\x2dfa2c\x2dba0914141f5a.mount: Deactivated successfully.
Aug 13 00:11:55.771580 kubelet[2484]: E0813 00:11:55.771545 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:55.881493 systemd-networkd[1374]: cali99504baaeb2: Link UP Aug 13 00:11:55.881722 systemd-networkd[1374]: cali99504baaeb2: Gained carrier Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.746 [INFO][4993] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0 coredns-674b8bbfcf- kube-system c774b47c-e08c-42ad-b562-dd791cc0ed35 1094 0 2025-08-13 00:11:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-lh4nv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali99504baaeb2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lh4nv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lh4nv-" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.746 [INFO][4993] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lh4nv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.794 [INFO][5007] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" HandleID="k8s-pod-network.435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.794 [INFO][5007] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" HandleID="k8s-pod-network.435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137400), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-lh4nv", "timestamp":"2025-08-13 00:11:55.787063364 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.794 [INFO][5007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.794 [INFO][5007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.794 [INFO][5007] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.808 [INFO][5007] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" host="localhost" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.825 [INFO][5007] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.834 [INFO][5007] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.839 [INFO][5007] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.844 [INFO][5007] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.844 [INFO][5007] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" host="localhost" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.846 [INFO][5007] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7 Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.854 [INFO][5007] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" host="localhost" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.867 [INFO][5007] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" host="localhost" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.868 [INFO][5007] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" host="localhost" Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.868 [INFO][5007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:11:55.898296 containerd[1430]: 2025-08-13 00:11:55.868 [INFO][5007] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" HandleID="k8s-pod-network.435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:11:55.899241 containerd[1430]: 2025-08-13 00:11:55.873 [INFO][4993] cni-plugin/k8s.go 418: Populated endpoint ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lh4nv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c774b47c-e08c-42ad-b562-dd791cc0ed35", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-lh4nv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99504baaeb2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:55.899241 containerd[1430]: 2025-08-13 00:11:55.875 [INFO][4993] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lh4nv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:11:55.899241 containerd[1430]: 2025-08-13 00:11:55.875 [INFO][4993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99504baaeb2 ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lh4nv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:11:55.899241 containerd[1430]: 2025-08-13 00:11:55.878 [INFO][4993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lh4nv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0"
Aug 13 00:11:55.899241 containerd[1430]: 2025-08-13 00:11:55.882 [INFO][4993] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lh4nv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c774b47c-e08c-42ad-b562-dd791cc0ed35", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7", Pod:"coredns-674b8bbfcf-lh4nv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99504baaeb2", MAC:"da:d1:a3:d0:45:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:11:55.899241 containerd[1430]: 2025-08-13 00:11:55.893 [INFO][4993] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lh4nv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:11:55.918695 containerd[1430]: time="2025-08-13T00:11:55.918528732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:11:55.918695 containerd[1430]: time="2025-08-13T00:11:55.918598900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:11:55.918695 containerd[1430]: time="2025-08-13T00:11:55.918613622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:11:55.919174 containerd[1430]: time="2025-08-13T00:11:55.918701352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:11:55.950558 systemd[1]: Started cri-containerd-435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7.scope - libcontainer container 435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7. Aug 13 00:11:55.966715 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:11:55.983176 containerd[1430]: time="2025-08-13T00:11:55.983135005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lh4nv,Uid:c774b47c-e08c-42ad-b562-dd791cc0ed35,Namespace:kube-system,Attempt:1,} returns sandbox id \"435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7\"" Aug 13 00:11:55.984193 kubelet[2484]: E0813 00:11:55.984155 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:56.035888 containerd[1430]: time="2025-08-13T00:11:56.035832126Z" level=info msg="CreateContainer within sandbox \"435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:11:56.039896 containerd[1430]: time="2025-08-13T00:11:56.039854783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:56.041593 containerd[1430]: time="2025-08-13T00:11:56.041537535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:11:56.042578 containerd[1430]: time="2025-08-13T00:11:56.042480242Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:56.045453 containerd[1430]: time="2025-08-13T00:11:56.045396454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:56.046153 containerd[1430]: time="2025-08-13T00:11:56.046113695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.074797991s" Aug 13 00:11:56.046153 containerd[1430]: time="2025-08-13T00:11:56.046152020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:11:56.047433 containerd[1430]: time="2025-08-13T00:11:56.047403602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:11:56.054482 containerd[1430]: time="2025-08-13T00:11:56.054438602Z" level=info msg="CreateContainer within sandbox \"435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"311ea01b4344f3e52fbc9e8daeb61e54b162296790e555ce951e9631a146dc08\""
Aug 13 00:11:56.055827 containerd[1430]: time="2025-08-13T00:11:56.055550689Z" level=info msg="StartContainer for \"311ea01b4344f3e52fbc9e8daeb61e54b162296790e555ce951e9631a146dc08\"" Aug 13 00:11:56.058981 containerd[1430]: time="2025-08-13T00:11:56.058947195Z" level=info msg="CreateContainer within sandbox \"73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:11:56.069609 containerd[1430]: time="2025-08-13T00:11:56.069566763Z" level=info msg="CreateContainer within sandbox \"73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bdd390f20620cd249ea32afbdef69475bda009051481763c409a4be9fe3326fd\"" Aug 13 00:11:56.070266 containerd[1430]: time="2025-08-13T00:11:56.070237400Z" level=info msg="StartContainer for \"bdd390f20620cd249ea32afbdef69475bda009051481763c409a4be9fe3326fd\"" Aug 13 00:11:56.088552 systemd[1]: Started cri-containerd-311ea01b4344f3e52fbc9e8daeb61e54b162296790e555ce951e9631a146dc08.scope - libcontainer container 311ea01b4344f3e52fbc9e8daeb61e54b162296790e555ce951e9631a146dc08. Aug 13 00:11:56.109586 systemd[1]: Started cri-containerd-bdd390f20620cd249ea32afbdef69475bda009051481763c409a4be9fe3326fd.scope - libcontainer container bdd390f20620cd249ea32afbdef69475bda009051481763c409a4be9fe3326fd. Aug 13 00:11:56.146321 containerd[1430]: time="2025-08-13T00:11:56.145983257Z" level=info msg="StartContainer for \"311ea01b4344f3e52fbc9e8daeb61e54b162296790e555ce951e9631a146dc08\" returns successfully" Aug 13 00:11:56.163086 containerd[1430]: time="2025-08-13T00:11:56.162498936Z" level=info msg="StartContainer for \"bdd390f20620cd249ea32afbdef69475bda009051481763c409a4be9fe3326fd\" returns successfully" Aug 13 00:11:56.270598 systemd-networkd[1374]: cali0f75d7584ee: Gained IPv6LL Aug 13 00:11:56.590493 systemd-networkd[1374]: cali59a113afdb4: Gained IPv6LL Aug 13 00:11:56.682871 systemd[1]: run-containerd-runc-k8s.io-435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7-runc.Og4xa8.mount: Deactivated successfully.
Aug 13 00:11:56.776367 kubelet[2484]: E0813 00:11:56.776318 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:56.781077 kubelet[2484]: E0813 00:11:56.780865 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:56.782552 systemd-networkd[1374]: cali14273bb6247: Gained IPv6LL Aug 13 00:11:56.818325 kubelet[2484]: I0813 00:11:56.816199 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lh4nv" podStartSLOduration=37.816180421 podStartE2EDuration="37.816180421s" podCreationTimestamp="2025-08-13 00:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:11:56.800493876 +0000 UTC m=+44.339258886" watchObservedRunningTime="2025-08-13 00:11:56.816180421 +0000 UTC m=+44.354945471" Aug 13 00:11:56.818325 kubelet[2484]: I0813 00:11:56.816507 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54446d6f8c-zdlwk" podStartSLOduration=21.74035703 podStartE2EDuration="24.816500137s" podCreationTimestamp="2025-08-13 00:11:32 +0000 UTC" firstStartedPulling="2025-08-13 00:11:52.971064353 +0000 UTC m=+40.509829403" lastFinishedPulling="2025-08-13 00:11:56.0472075 +0000 UTC m=+43.585972510" observedRunningTime="2025-08-13 00:11:56.814892074 +0000 UTC m=+44.353657124" watchObservedRunningTime="2025-08-13 00:11:56.816500137 +0000 UTC m=+44.355265147" Aug 13 00:11:57.231542 systemd-networkd[1374]: cali99504baaeb2: Gained IPv6LL Aug 13 00:11:57.272582 containerd[1430]: time="2025-08-13T00:11:57.272509253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:57.273812 containerd[1430]: time="2025-08-13T00:11:57.273735949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 13 00:11:57.274781 containerd[1430]: time="2025-08-13T00:11:57.274718939Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:57.281293 containerd[1430]: time="2025-08-13T00:11:57.281234185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:57.281860 containerd[1430]: time="2025-08-13T00:11:57.281823050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.234383124s" Aug 13 00:11:57.281917 containerd[1430]: time="2025-08-13T00:11:57.281858934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Aug 13 00:11:57.286041 containerd[1430]: time="2025-08-13T00:11:57.285960671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:11:57.288691 containerd[1430]: time="2025-08-13T00:11:57.288653211Z" level=info msg="CreateContainer within sandbox \"cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:11:57.317774 containerd[1430]: time="2025-08-13T00:11:57.317719409Z" level=info msg="CreateContainer within sandbox \"cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5a1404da555030c29dfbfed69a00f04114bf81779261a1de98cefb040a1aeaee\"" Aug 13 00:11:57.318685 containerd[1430]: time="2025-08-13T00:11:57.318628790Z" level=info msg="StartContainer for \"5a1404da555030c29dfbfed69a00f04114bf81779261a1de98cefb040a1aeaee\"" Aug 13 00:11:57.351563 systemd[1]: Started cri-containerd-5a1404da555030c29dfbfed69a00f04114bf81779261a1de98cefb040a1aeaee.scope - libcontainer container 5a1404da555030c29dfbfed69a00f04114bf81779261a1de98cefb040a1aeaee. Aug 13 00:11:57.358541 systemd-networkd[1374]: califc5489654d2: Gained IPv6LL Aug 13 00:11:57.392981 containerd[1430]: time="2025-08-13T00:11:57.392933788Z" level=info msg="StartContainer for \"5a1404da555030c29dfbfed69a00f04114bf81779261a1de98cefb040a1aeaee\" returns successfully" Aug 13 00:11:57.785033 kubelet[2484]: E0813 00:11:57.785004 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:58.787817 kubelet[2484]: E0813 00:11:58.787779 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:11:59.170588 containerd[1430]: time="2025-08-13T00:11:59.170516223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:59.172849 containerd[1430]: time="2025-08-13T00:11:59.172592645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:11:59.175033 containerd[1430]: time="2025-08-13T00:11:59.174981821Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:59.178930 containerd[1430]: time="2025-08-13T00:11:59.178882359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:59.179563 containerd[1430]: time="2025-08-13T00:11:59.179484703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.893478107s" Aug 13 00:11:59.179563 containerd[1430]: time="2025-08-13T00:11:59.179518907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Aug 13 00:11:59.182205 containerd[1430]: time="2025-08-13T00:11:59.181787950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:11:59.185268 containerd[1430]: time="2025-08-13T00:11:59.185193274Z" level=info msg="CreateContainer within sandbox \"878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:11:59.203007 containerd[1430]: time="2025-08-13T00:11:59.202959617Z" level=info msg="CreateContainer within sandbox \"878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f6f04e5b839e8815c0bcda788541ea366a12fbd34b9166e6bfedd135433ce243\"" Aug 13 00:11:59.204130 containerd[1430]: time="2025-08-13T00:11:59.204083098Z" level=info msg="StartContainer for \"f6f04e5b839e8815c0bcda788541ea366a12fbd34b9166e6bfedd135433ce243\"" Aug 13 00:11:59.243588 systemd[1]: Started cri-containerd-f6f04e5b839e8815c0bcda788541ea366a12fbd34b9166e6bfedd135433ce243.scope - libcontainer container f6f04e5b839e8815c0bcda788541ea366a12fbd34b9166e6bfedd135433ce243. Aug 13 00:11:59.279099 containerd[1430]: time="2025-08-13T00:11:59.279054247Z" level=info msg="StartContainer for \"f6f04e5b839e8815c0bcda788541ea366a12fbd34b9166e6bfedd135433ce243\" returns successfully" Aug 13 00:11:59.441456 containerd[1430]: time="2025-08-13T00:11:59.441318466Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:11:59.443143 containerd[1430]: time="2025-08-13T00:11:59.443106537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:11:59.444856 containerd[1430]: time="2025-08-13T00:11:59.444812120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 262.987526ms" Aug 13 00:11:59.444856 containerd[1430]: time="2025-08-13T00:11:59.444856685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:11:59.446078 containerd[1430]: time="2025-08-13T00:11:59.445756461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:11:59.449685 containerd[1430]: time="2025-08-13T00:11:59.449651958Z" level=info msg="CreateContainer within sandbox \"0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:11:59.468804 containerd[1430]: time="2025-08-13T00:11:59.468746803Z" level=info msg="CreateContainer within sandbox \"0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe822130cffc7506e510f628592d8ead9b83298f20c815efbfa40e3d07d941c9\"" Aug 13 00:11:59.469555 containerd[1430]: time="2025-08-13T00:11:59.469441398Z" level=info msg="StartContainer for \"fe822130cffc7506e510f628592d8ead9b83298f20c815efbfa40e3d07d941c9\""
Aug 13 00:11:59.512523 systemd[1]: Started cri-containerd-fe822130cffc7506e510f628592d8ead9b83298f20c815efbfa40e3d07d941c9.scope - libcontainer container fe822130cffc7506e510f628592d8ead9b83298f20c815efbfa40e3d07d941c9. Aug 13 00:11:59.546203 containerd[1430]: time="2025-08-13T00:11:59.546154334Z" level=info msg="StartContainer for \"fe822130cffc7506e510f628592d8ead9b83298f20c815efbfa40e3d07d941c9\" returns successfully" Aug 13 00:11:59.807285 kubelet[2484]: I0813 00:11:59.806299 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69bf9ff8c6-mp76l" podStartSLOduration=28.023407866 podStartE2EDuration="31.806279793s" podCreationTimestamp="2025-08-13 00:11:28 +0000 UTC" firstStartedPulling="2025-08-13 00:11:55.398441772 +0000 UTC m=+42.937206782" lastFinishedPulling="2025-08-13 00:11:59.181313659 +0000 UTC m=+46.720078709" observedRunningTime="2025-08-13 00:11:59.803934822 +0000 UTC m=+47.342699872" watchObservedRunningTime="2025-08-13 00:11:59.806279793 +0000 UTC m=+47.345044803" Aug 13 00:11:59.821662 kubelet[2484]: I0813 00:11:59.819774 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69bf9ff8c6-ds8gn" podStartSLOduration=27.808460136 podStartE2EDuration="31.819758877s" podCreationTimestamp="2025-08-13 00:11:28 +0000 UTC" firstStartedPulling="2025-08-13 00:11:55.434294022 +0000 UTC m=+42.973059072" lastFinishedPulling="2025-08-13 00:11:59.445592763 +0000 UTC m=+46.984357813" observedRunningTime="2025-08-13 00:11:59.81941288 +0000 UTC m=+47.358177930" watchObservedRunningTime="2025-08-13 00:11:59.819758877 +0000 UTC m=+47.358523927" Aug 13 00:11:59.953119 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:44698.service - OpenSSH per-connection server daemon (10.0.0.1:44698). Aug 13 00:12:00.021385 sshd[5320]: Accepted publickey for core from 10.0.0.1 port 44698 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:12:00.023330 sshd[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:12:00.032434 systemd-logind[1418]: New session 9 of user core. Aug 13 00:12:00.037553 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:12:00.382447 sshd[5320]: pam_unix(sshd:session): session closed for user core Aug 13 00:12:00.386635 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:44698.service: Deactivated successfully. Aug 13 00:12:00.390070 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:12:00.392999 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:12:00.395525 systemd-logind[1418]: Removed session 9. Aug 13 00:12:02.246937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170130866.mount: Deactivated successfully.
Aug 13 00:12:02.676379 containerd[1430]: time="2025-08-13T00:12:02.676310141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:12:02.677050 containerd[1430]: time="2025-08-13T00:12:02.677014533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 13 00:12:02.678015 containerd[1430]: time="2025-08-13T00:12:02.677978671Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:12:02.683062 containerd[1430]: time="2025-08-13T00:12:02.683014103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:12:02.684740 containerd[1430]: time="2025-08-13T00:12:02.684113054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.23832327s" Aug 13 00:12:02.684740 containerd[1430]: time="2025-08-13T00:12:02.684155619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:12:02.685229 containerd[1430]: time="2025-08-13T00:12:02.685190204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:12:02.688222 containerd[1430]: time="2025-08-13T00:12:02.688188588Z" level=info msg="CreateContainer within sandbox \"073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:12:02.703699 containerd[1430]: time="2025-08-13T00:12:02.703652200Z" level=info msg="CreateContainer within sandbox \"073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1419b5f6606054c90554fa869cc11734247640870a92def3a60d106f2abbb95d\"" Aug 13 00:12:02.704727 containerd[1430]: time="2025-08-13T00:12:02.704682904Z" level=info msg="StartContainer for \"1419b5f6606054c90554fa869cc11734247640870a92def3a60d106f2abbb95d\"" Aug 13 00:12:02.704849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100180474.mount: Deactivated successfully. Aug 13 00:12:02.786560 systemd[1]: Started cri-containerd-1419b5f6606054c90554fa869cc11734247640870a92def3a60d106f2abbb95d.scope - libcontainer container 1419b5f6606054c90554fa869cc11734247640870a92def3a60d106f2abbb95d. 
Aug 13 00:12:02.886765 containerd[1430]: time="2025-08-13T00:12:02.886718321Z" level=info msg="StartContainer for \"1419b5f6606054c90554fa869cc11734247640870a92def3a60d106f2abbb95d\" returns successfully"
Aug 13 00:12:04.749044 containerd[1430]: time="2025-08-13T00:12:04.748991171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:12:04.752936 containerd[1430]: time="2025-08-13T00:12:04.752713978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Aug 13 00:12:04.756181 containerd[1430]: time="2025-08-13T00:12:04.756124274Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:12:04.761084 containerd[1430]: time="2025-08-13T00:12:04.761030757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:12:04.762068 containerd[1430]: time="2025-08-13T00:12:04.762028455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.076690477s"
Aug 13 00:12:04.762116 containerd[1430]: time="2025-08-13T00:12:04.762069459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Aug 13 00:12:04.790745 containerd[1430]: time="2025-08-13T00:12:04.790696879Z" level=info msg="CreateContainer within sandbox \"cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 00:12:04.871425 containerd[1430]: time="2025-08-13T00:12:04.871315820Z" level=info msg="CreateContainer within sandbox \"cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0e164803cb27bfe20752881a7a507bac71dceccc1e7c32a5d47f091cdb0839d9\""
Aug 13 00:12:04.872003 containerd[1430]: time="2025-08-13T00:12:04.871978045Z" level=info msg="StartContainer for \"0e164803cb27bfe20752881a7a507bac71dceccc1e7c32a5d47f091cdb0839d9\""
Aug 13 00:12:04.913578 systemd[1]: Started cri-containerd-0e164803cb27bfe20752881a7a507bac71dceccc1e7c32a5d47f091cdb0839d9.scope - libcontainer container 0e164803cb27bfe20752881a7a507bac71dceccc1e7c32a5d47f091cdb0839d9.
Aug 13 00:12:04.946734 containerd[1430]: time="2025-08-13T00:12:04.946274003Z" level=info msg="StartContainer for \"0e164803cb27bfe20752881a7a507bac71dceccc1e7c32a5d47f091cdb0839d9\" returns successfully"
Aug 13 00:12:05.401780 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:38980.service - OpenSSH per-connection server daemon (10.0.0.1:38980).
Aug 13 00:12:05.477699 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 38980 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:05.479803 sshd[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:05.484434 systemd-logind[1418]: New session 10 of user core.
Aug 13 00:12:05.492604 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 00:12:05.628556 kubelet[2484]: I0813 00:12:05.628498 2484 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 00:12:05.632306 kubelet[2484]: I0813 00:12:05.632253 2484 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 00:12:05.846986 sshd[5482]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:05.849710 kubelet[2484]: I0813 00:12:05.847330 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-tg9t5" podStartSLOduration=27.67866982 podStartE2EDuration="34.84731316s" podCreationTimestamp="2025-08-13 00:11:31 +0000 UTC" firstStartedPulling="2025-08-13 00:11:55.516328121 +0000 UTC m=+43.055093171" lastFinishedPulling="2025-08-13 00:12:02.684971461 +0000 UTC m=+50.223736511" observedRunningTime="2025-08-13 00:12:03.843889105 +0000 UTC m=+51.382654155" watchObservedRunningTime="2025-08-13 00:12:05.84731316 +0000 UTC m=+53.386078170"
Aug 13 00:12:05.849710 kubelet[2484]: I0813 00:12:05.847556 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nqv9j" podStartSLOduration=24.360556685 podStartE2EDuration="33.847551103s" podCreationTimestamp="2025-08-13 00:11:32 +0000 UTC" firstStartedPulling="2025-08-13 00:11:55.27587596 +0000 UTC m=+42.814641010" lastFinishedPulling="2025-08-13 00:12:04.762870378 +0000 UTC m=+52.301635428" observedRunningTime="2025-08-13 00:12:05.846781028 +0000 UTC m=+53.385546078" watchObservedRunningTime="2025-08-13 00:12:05.847551103 +0000 UTC m=+53.386316153"
Aug 13 00:12:05.862337 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:38980.service: Deactivated successfully.
Aug 13 00:12:05.865739 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:12:05.868730 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:12:05.874766 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:38986.service - OpenSSH per-connection server daemon (10.0.0.1:38986).
Aug 13 00:12:05.876390 systemd-logind[1418]: Removed session 10.
Aug 13 00:12:05.907802 sshd[5499]: Accepted publickey for core from 10.0.0.1 port 38986 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:05.909332 sshd[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:05.913987 systemd-logind[1418]: New session 11 of user core.
Aug 13 00:12:05.924730 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 00:12:06.152697 sshd[5499]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:06.165134 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:38986.service: Deactivated successfully.
Aug 13 00:12:06.167258 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:12:06.169953 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:12:06.180917 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:38996.service - OpenSSH per-connection server daemon (10.0.0.1:38996).
Aug 13 00:12:06.183318 systemd-logind[1418]: Removed session 11.
Aug 13 00:12:06.218959 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 38996 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:06.220543 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:06.224784 systemd-logind[1418]: New session 12 of user core.
Aug 13 00:12:06.234600 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:12:06.388534 sshd[5513]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:06.394289 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:38996.service: Deactivated successfully.
Aug 13 00:12:06.396206 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:12:06.397006 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:12:06.397986 systemd-logind[1418]: Removed session 12.
Aug 13 00:12:11.405789 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:39010.service - OpenSSH per-connection server daemon (10.0.0.1:39010).
Aug 13 00:12:11.459942 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 39010 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:11.461680 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:11.466490 systemd-logind[1418]: New session 13 of user core.
Aug 13 00:12:11.473648 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:12:11.650752 sshd[5561]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:11.657566 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:39010.service: Deactivated successfully.
Aug 13 00:12:11.659966 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:12:11.660950 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:12:11.661870 systemd-logind[1418]: Removed session 13.
Aug 13 00:12:12.536873 containerd[1430]: time="2025-08-13T00:12:12.536828147Z" level=info msg="StopPodSandbox for \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\""
Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.599 [WARNING][5585] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0", GenerateName:"calico-apiserver-69bf9ff8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"559a4267-56cf-459b-a0e0-15a1cc2cb395", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bf9ff8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390", Pod:"calico-apiserver-69bf9ff8c6-mp76l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f75d7584ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.599 [INFO][5585] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.599 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" iface="eth0" netns="" Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.599 [INFO][5585] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.599 [INFO][5585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.619 [INFO][5595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" HandleID="k8s-pod-network.7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.619 [INFO][5595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.619 [INFO][5595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.629 [WARNING][5595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" HandleID="k8s-pod-network.7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.629 [INFO][5595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" HandleID="k8s-pod-network.7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.631 [INFO][5595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:12.635537 containerd[1430]: 2025-08-13 00:12:12.633 [INFO][5585] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:12:12.635537 containerd[1430]: time="2025-08-13T00:12:12.635615257Z" level=info msg="TearDown network for sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\" successfully" Aug 13 00:12:12.635537 containerd[1430]: time="2025-08-13T00:12:12.635648220Z" level=info msg="StopPodSandbox for \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\" returns successfully" Aug 13 00:12:12.636834 containerd[1430]: time="2025-08-13T00:12:12.636789682Z" level=info msg="RemovePodSandbox for \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\"" Aug 13 00:12:12.649215 containerd[1430]: time="2025-08-13T00:12:12.649155027Z" level=info msg="Forcibly stopping sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\"" Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.692 [WARNING][5613] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0", GenerateName:"calico-apiserver-69bf9ff8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"559a4267-56cf-459b-a0e0-15a1cc2cb395", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bf9ff8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"878d3534a3138d676a901fa4d73ff49bda525e0fe198bb7f26482d50ef23e390", Pod:"calico-apiserver-69bf9ff8c6-mp76l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f75d7584ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.692 [INFO][5613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.692 [INFO][5613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" iface="eth0" netns="" Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.692 [INFO][5613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.692 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.718 [INFO][5622] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" HandleID="k8s-pod-network.7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.718 [INFO][5622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.718 [INFO][5622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.728 [WARNING][5622] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" HandleID="k8s-pod-network.7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.728 [INFO][5622] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" HandleID="k8s-pod-network.7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--mp76l-eth0" Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.731 [INFO][5622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:12.735839 containerd[1430]: 2025-08-13 00:12:12.734 [INFO][5613] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3" Aug 13 00:12:12.736309 containerd[1430]: time="2025-08-13T00:12:12.735914583Z" level=info msg="TearDown network for sandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\" successfully" Aug 13 00:12:12.764320 containerd[1430]: time="2025-08-13T00:12:12.764251116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:12:12.764538 containerd[1430]: time="2025-08-13T00:12:12.764430412Z" level=info msg="RemovePodSandbox \"7546ee61e036704ade56402cf49df466b40522f164cff8eda89dd2ae69c57ce3\" returns successfully" Aug 13 00:12:12.765403 containerd[1430]: time="2025-08-13T00:12:12.765106632Z" level=info msg="StopPodSandbox for \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\"" Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.807 [WARNING][5640] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nqv9j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e81ec000-f2d6-44b6-854d-59a730f62e7e", ResourceVersion:"1232", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9", Pod:"csi-node-driver-nqv9j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59a113afdb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.807 [INFO][5640] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.807 [INFO][5640] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" iface="eth0" netns="" Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.807 [INFO][5640] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.807 [INFO][5640] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.829 [INFO][5649] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" HandleID="k8s-pod-network.1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.829 [INFO][5649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.829 [INFO][5649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.839 [WARNING][5649] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" HandleID="k8s-pod-network.1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.839 [INFO][5649] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" HandleID="k8s-pod-network.1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.843 [INFO][5649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:12.847310 containerd[1430]: 2025-08-13 00:12:12.845 [INFO][5640] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:12:12.848130 containerd[1430]: time="2025-08-13T00:12:12.847795424Z" level=info msg="TearDown network for sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\" successfully" Aug 13 00:12:12.848130 containerd[1430]: time="2025-08-13T00:12:12.847827307Z" level=info msg="StopPodSandbox for \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\" returns successfully" Aug 13 00:12:12.848892 containerd[1430]: time="2025-08-13T00:12:12.848859599Z" level=info msg="RemovePodSandbox for \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\"" Aug 13 00:12:12.848968 containerd[1430]: time="2025-08-13T00:12:12.848908843Z" level=info msg="Forcibly stopping sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\"" Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.887 [WARNING][5667] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nqv9j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e81ec000-f2d6-44b6-854d-59a730f62e7e", ResourceVersion:"1232", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc4252e22c42d9a2122e68bdfbc80951cca0488ad0db6d8f33bf29a20f8bece9", Pod:"csi-node-driver-nqv9j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59a113afdb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.887 [INFO][5667] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.887 [INFO][5667] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" iface="eth0" netns="" Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.887 [INFO][5667] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.887 [INFO][5667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.917 [INFO][5676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" HandleID="k8s-pod-network.1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.918 [INFO][5676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.918 [INFO][5676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.928 [WARNING][5676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" HandleID="k8s-pod-network.1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.928 [INFO][5676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" HandleID="k8s-pod-network.1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Workload="localhost-k8s-csi--node--driver--nqv9j-eth0" Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.930 [INFO][5676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:12.935595 containerd[1430]: 2025-08-13 00:12:12.932 [INFO][5667] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd" Aug 13 00:12:12.935595 containerd[1430]: time="2025-08-13T00:12:12.935109349Z" level=info msg="TearDown network for sandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\" successfully" Aug 13 00:12:12.939229 containerd[1430]: time="2025-08-13T00:12:12.939169712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:12:12.939566 containerd[1430]: time="2025-08-13T00:12:12.939447496Z" level=info msg="RemovePodSandbox \"1c87c9585af80d8755fb77c04a996f8d98fc4eecc23dcc2ecbf0830dd7006dfd\" returns successfully" Aug 13 00:12:12.939953 containerd[1430]: time="2025-08-13T00:12:12.939929059Z" level=info msg="StopPodSandbox for \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\"" Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:12.980 [WARNING][5696] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0", GenerateName:"calico-kube-controllers-54446d6f8c-", Namespace:"calico-system", SelfLink:"", UID:"94cfa85d-0b82-444c-ba96-be8ce7895a84", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54446d6f8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e", Pod:"calico-kube-controllers-54446d6f8c-zdlwk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali15162155475", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:12.980 [INFO][5696] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:12.980 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" iface="eth0" netns="" Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:12.980 [INFO][5696] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:12.980 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:13.000 [INFO][5705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" HandleID="k8s-pod-network.d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:13.000 [INFO][5705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:13.000 [INFO][5705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:13.009 [WARNING][5705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" HandleID="k8s-pod-network.d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:13.009 [INFO][5705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" HandleID="k8s-pod-network.d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:13.011 [INFO][5705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:13.015228 containerd[1430]: 2025-08-13 00:12:13.013 [INFO][5696] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:12:13.016005 containerd[1430]: time="2025-08-13T00:12:13.015275902Z" level=info msg="TearDown network for sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\" successfully" Aug 13 00:12:13.016005 containerd[1430]: time="2025-08-13T00:12:13.015300744Z" level=info msg="StopPodSandbox for \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\" returns successfully" Aug 13 00:12:13.016005 containerd[1430]: time="2025-08-13T00:12:13.015807149Z" level=info msg="RemovePodSandbox for \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\"" Aug 13 00:12:13.016005 containerd[1430]: time="2025-08-13T00:12:13.015839592Z" level=info msg="Forcibly stopping sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\"" Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.051 [WARNING][5724] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0", GenerateName:"calico-kube-controllers-54446d6f8c-", Namespace:"calico-system", SelfLink:"", UID:"94cfa85d-0b82-444c-ba96-be8ce7895a84", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54446d6f8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73c5cfa3d04dc7346c8889517a08bce06b73836311f1c20e8e344def8175c16e", Pod:"calico-kube-controllers-54446d6f8c-zdlwk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali15162155475", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.052 [INFO][5724] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.052 [INFO][5724] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" iface="eth0" netns="" Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.052 [INFO][5724] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.052 [INFO][5724] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.073 [INFO][5733] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" HandleID="k8s-pod-network.d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.073 [INFO][5733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.073 [INFO][5733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.084 [WARNING][5733] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" HandleID="k8s-pod-network.d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.084 [INFO][5733] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" HandleID="k8s-pod-network.d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Workload="localhost-k8s-calico--kube--controllers--54446d6f8c--zdlwk-eth0" Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.085 [INFO][5733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:13.090406 containerd[1430]: 2025-08-13 00:12:13.087 [INFO][5724] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe" Aug 13 00:12:13.090406 containerd[1430]: time="2025-08-13T00:12:13.089654928Z" level=info msg="TearDown network for sandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\" successfully" Aug 13 00:12:13.092742 containerd[1430]: time="2025-08-13T00:12:13.092699878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:12:13.092847 containerd[1430]: time="2025-08-13T00:12:13.092775484Z" level=info msg="RemovePodSandbox \"d5b519c1d71a3648d718439eaa4da408b2ceea5d0c70cf28be405637469162fe\" returns successfully" Aug 13 00:12:13.093631 containerd[1430]: time="2025-08-13T00:12:13.093604638Z" level=info msg="StopPodSandbox for \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\"" Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.130 [WARNING][5750] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"fbdf544e-e157-4095-9b30-e5d9130445c2", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6", Pod:"goldmane-768f4c5c69-tg9t5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14273bb6247", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.130 [INFO][5750] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.130 [INFO][5750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" iface="eth0" netns="" Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.130 [INFO][5750] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.130 [INFO][5750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.149 [INFO][5758] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" HandleID="k8s-pod-network.ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.149 [INFO][5758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.149 [INFO][5758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.158 [WARNING][5758] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" HandleID="k8s-pod-network.ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.158 [INFO][5758] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" HandleID="k8s-pod-network.ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.160 [INFO][5758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:13.165481 containerd[1430]: 2025-08-13 00:12:13.163 [INFO][5750] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:12:13.165481 containerd[1430]: time="2025-08-13T00:12:13.165063885Z" level=info msg="TearDown network for sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\" successfully" Aug 13 00:12:13.165481 containerd[1430]: time="2025-08-13T00:12:13.165088887Z" level=info msg="StopPodSandbox for \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\" returns successfully" Aug 13 00:12:13.165921 containerd[1430]: time="2025-08-13T00:12:13.165603373Z" level=info msg="RemovePodSandbox for \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\"" Aug 13 00:12:13.165921 containerd[1430]: time="2025-08-13T00:12:13.165636616Z" level=info msg="Forcibly stopping sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\"" Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.201 [WARNING][5775] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"fbdf544e-e157-4095-9b30-e5d9130445c2", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"073b6e0706fa650a3efe3bff005c7120320a6c6cf89c432da63effb99eaf89c6", Pod:"goldmane-768f4c5c69-tg9t5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14273bb6247", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.201 [INFO][5775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.201 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" iface="eth0" netns="" Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.201 [INFO][5775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.201 [INFO][5775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.220 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" HandleID="k8s-pod-network.ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.220 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.220 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.229 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" HandleID="k8s-pod-network.ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.229 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" HandleID="k8s-pod-network.ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Workload="localhost-k8s-goldmane--768f4c5c69--tg9t5-eth0" Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.231 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:13.234891 containerd[1430]: 2025-08-13 00:12:13.232 [INFO][5775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094" Aug 13 00:12:13.235285 containerd[1430]: time="2025-08-13T00:12:13.234931792Z" level=info msg="TearDown network for sandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\" successfully" Aug 13 00:12:13.238503 containerd[1430]: time="2025-08-13T00:12:13.238454864Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:12:13.238602 containerd[1430]: time="2025-08-13T00:12:13.238541591Z" level=info msg="RemovePodSandbox \"ffa95b7f0f4867df8052d303a49422399b4fddedc0f287dbd7971ce550033094\" returns successfully" Aug 13 00:12:13.239067 containerd[1430]: time="2025-08-13T00:12:13.239039195Z" level=info msg="StopPodSandbox for \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\"" Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.278 [WARNING][5801] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b6c872d-33f8-4452-b725-41047a59fd6c", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05", Pod:"coredns-674b8bbfcf-f2bhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ac3fcf92d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.278 [INFO][5801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.278 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" iface="eth0" netns="" Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.278 [INFO][5801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.278 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.303 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" HandleID="k8s-pod-network.98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.303 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.303 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.312 [WARNING][5810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" HandleID="k8s-pod-network.98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.312 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" HandleID="k8s-pod-network.98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.314 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:13.317942 containerd[1430]: 2025-08-13 00:12:13.316 [INFO][5801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:12:13.318433 containerd[1430]: time="2025-08-13T00:12:13.317981906Z" level=info msg="TearDown network for sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\" successfully" Aug 13 00:12:13.318433 containerd[1430]: time="2025-08-13T00:12:13.318014068Z" level=info msg="StopPodSandbox for \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\" returns successfully" Aug 13 00:12:13.318951 containerd[1430]: time="2025-08-13T00:12:13.318665806Z" level=info msg="RemovePodSandbox for \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\"" Aug 13 00:12:13.318951 containerd[1430]: time="2025-08-13T00:12:13.318701129Z" level=info msg="Forcibly stopping sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\"" Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.355 [WARNING][5827] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b6c872d-33f8-4452-b725-41047a59fd6c", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ed209ac6941dd562c533b70b3cef2eaffaa99db854aa0aa9b6b4c7fd43f8a05", Pod:"coredns-674b8bbfcf-f2bhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ac3fcf92d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.355 [INFO][5827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.355 [INFO][5827] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" iface="eth0" netns="" Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.355 [INFO][5827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.355 [INFO][5827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.374 [INFO][5836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" HandleID="k8s-pod-network.98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.374 [INFO][5836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.374 [INFO][5836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.383 [WARNING][5836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" HandleID="k8s-pod-network.98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.383 [INFO][5836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" HandleID="k8s-pod-network.98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Workload="localhost-k8s-coredns--674b8bbfcf--f2bhn-eth0" Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.385 [INFO][5836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:13.389182 containerd[1430]: 2025-08-13 00:12:13.387 [INFO][5827] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a" Aug 13 00:12:13.391145 containerd[1430]: time="2025-08-13T00:12:13.389670333Z" level=info msg="TearDown network for sandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\" successfully" Aug 13 00:12:13.392961 containerd[1430]: time="2025-08-13T00:12:13.392749366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:12:13.392961 containerd[1430]: time="2025-08-13T00:12:13.392848015Z" level=info msg="RemovePodSandbox \"98ef07c8daf73550f87bb48e2f91edaeb4d5461885d8293f76a00f0f7f8c607a\" returns successfully" Aug 13 00:12:13.393549 containerd[1430]: time="2025-08-13T00:12:13.393418505Z" level=info msg="StopPodSandbox for \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\"" Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.431 [WARNING][5854] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c774b47c-e08c-42ad-b562-dd791cc0ed35", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7", Pod:"coredns-674b8bbfcf-lh4nv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99504baaeb2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.431 [INFO][5854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.431 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" iface="eth0" netns="" Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.431 [INFO][5854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.431 [INFO][5854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.454 [INFO][5863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" HandleID="k8s-pod-network.a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.454 [INFO][5863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.454 [INFO][5863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.463 [WARNING][5863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" HandleID="k8s-pod-network.a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0"
Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.463 [INFO][5863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" HandleID="k8s-pod-network.a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0"
Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.465 [INFO][5863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:12:13.482665 containerd[1430]: 2025-08-13 00:12:13.467 [INFO][5854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a"
Aug 13 00:12:13.484166 containerd[1430]: time="2025-08-13T00:12:13.483406553Z" level=info msg="TearDown network for sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\" successfully"
Aug 13 00:12:13.484166 containerd[1430]: time="2025-08-13T00:12:13.483452077Z" level=info msg="StopPodSandbox for \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\" returns successfully"
Aug 13 00:12:13.484458 containerd[1430]: time="2025-08-13T00:12:13.484403522Z" level=info msg="RemovePodSandbox for \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\""
Aug 13 00:12:13.484458 containerd[1430]: time="2025-08-13T00:12:13.484439245Z" level=info msg="Forcibly stopping sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\""
Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.521 [WARNING][5880] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c774b47c-e08c-42ad-b562-dd791cc0ed35", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"435fb2088df7756585139862e6916df98cf5ec448d190fc9616dff82b74d87c7", Pod:"coredns-674b8bbfcf-lh4nv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99504baaeb2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.521 [INFO][5880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.521 [INFO][5880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" iface="eth0" netns="" Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.521 [INFO][5880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.521 [INFO][5880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.542 [INFO][5889] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" HandleID="k8s-pod-network.a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0" Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.542 [INFO][5889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.542 [INFO][5889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.552 [WARNING][5889] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" HandleID="k8s-pod-network.a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0"
Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.552 [INFO][5889] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" HandleID="k8s-pod-network.a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a" Workload="localhost-k8s-coredns--674b8bbfcf--lh4nv-eth0"
Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.553 [INFO][5889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:12:13.557753 containerd[1430]: 2025-08-13 00:12:13.555 [INFO][5880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a"
Aug 13 00:12:13.557753 containerd[1430]: time="2025-08-13T00:12:13.557739055Z" level=info msg="TearDown network for sandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\" successfully"
Aug 13 00:12:13.560673 containerd[1430]: time="2025-08-13T00:12:13.560630711Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:12:13.560773 containerd[1430]: time="2025-08-13T00:12:13.560715039Z" level=info msg="RemovePodSandbox \"a07d83b7d7a31dff8607e1a3a75607b27c3548e49336d60ff32db1df15f0728a\" returns successfully"
Aug 13 00:12:13.561267 containerd[1430]: time="2025-08-13T00:12:13.561235445Z" level=info msg="StopPodSandbox for \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\""
Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.597 [WARNING][5907] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0", GenerateName:"calico-apiserver-69bf9ff8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"73c192b0-5021-43fd-851e-5152f889105e", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bf9ff8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f", Pod:"calico-apiserver-69bf9ff8c6-ds8gn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc5489654d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.597 [INFO][5907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.597 [INFO][5907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" iface="eth0" netns="" Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.597 [INFO][5907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.597 [INFO][5907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.617 [INFO][5916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" HandleID="k8s-pod-network.f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.617 [INFO][5916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.617 [INFO][5916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.628 [WARNING][5916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" HandleID="k8s-pod-network.f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.628 [INFO][5916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" HandleID="k8s-pod-network.f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.630 [INFO][5916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:13.636521 containerd[1430]: 2025-08-13 00:12:13.633 [INFO][5907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:12:13.637275 containerd[1430]: time="2025-08-13T00:12:13.636530632Z" level=info msg="TearDown network for sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\" successfully" Aug 13 00:12:13.637275 containerd[1430]: time="2025-08-13T00:12:13.636558674Z" level=info msg="StopPodSandbox for \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\" returns successfully" Aug 13 00:12:13.637275 containerd[1430]: time="2025-08-13T00:12:13.636988473Z" level=info msg="RemovePodSandbox for \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\"" Aug 13 00:12:13.637275 containerd[1430]: time="2025-08-13T00:12:13.637020955Z" level=info msg="Forcibly stopping sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\"" Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.671 [WARNING][5933] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0", GenerateName:"calico-apiserver-69bf9ff8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"73c192b0-5021-43fd-851e-5152f889105e", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 11, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bf9ff8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0fb674fd295e28075ad74eb0c99d8633a21ae38bad20e2285aee4e255730564f", Pod:"calico-apiserver-69bf9ff8c6-ds8gn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc5489654d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.672 [INFO][5933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.672 [INFO][5933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" iface="eth0" netns="" Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.672 [INFO][5933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.672 [INFO][5933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.696 [INFO][5942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" HandleID="k8s-pod-network.f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.696 [INFO][5942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.696 [INFO][5942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.709 [WARNING][5942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" HandleID="k8s-pod-network.f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.709 [INFO][5942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" HandleID="k8s-pod-network.f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Workload="localhost-k8s-calico--apiserver--69bf9ff8c6--ds8gn-eth0" Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.711 [INFO][5942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:13.715470 containerd[1430]: 2025-08-13 00:12:13.713 [INFO][5933] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d" Aug 13 00:12:13.716013 containerd[1430]: time="2025-08-13T00:12:13.715518506Z" level=info msg="TearDown network for sandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\" successfully" Aug 13 00:12:13.730132 containerd[1430]: time="2025-08-13T00:12:13.730044912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:12:13.730132 containerd[1430]: time="2025-08-13T00:12:13.730139641Z" level=info msg="RemovePodSandbox \"f2a90fe644e2859e3588b0441b467575d2c34363b707a253b60117245a4f9e1d\" returns successfully" Aug 13 00:12:13.730740 containerd[1430]: time="2025-08-13T00:12:13.730705651Z" level=info msg="StopPodSandbox for \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\"" Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.766 [WARNING][5961] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" WorkloadEndpoint="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.766 [INFO][5961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.766 [INFO][5961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" iface="eth0" netns="" Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.766 [INFO][5961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.766 [INFO][5961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.789 [INFO][5969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" HandleID="k8s-pod-network.5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Workload="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.790 [INFO][5969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.790 [INFO][5969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.801 [WARNING][5969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" HandleID="k8s-pod-network.5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Workload="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.801 [INFO][5969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" HandleID="k8s-pod-network.5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Workload="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.803 [INFO][5969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:13.807534 containerd[1430]: 2025-08-13 00:12:13.805 [INFO][5961] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:12:13.807534 containerd[1430]: time="2025-08-13T00:12:13.807424204Z" level=info msg="TearDown network for sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\" successfully" Aug 13 00:12:13.807534 containerd[1430]: time="2025-08-13T00:12:13.807457287Z" level=info msg="StopPodSandbox for \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\" returns successfully" Aug 13 00:12:13.808652 containerd[1430]: time="2025-08-13T00:12:13.808609029Z" level=info msg="RemovePodSandbox for \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\"" Aug 13 00:12:13.809036 containerd[1430]: time="2025-08-13T00:12:13.808759362Z" level=info msg="Forcibly stopping sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\"" Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.843 [WARNING][5986] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" WorkloadEndpoint="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.844 [INFO][5986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.844 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" iface="eth0" netns="" Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.844 [INFO][5986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.844 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.864 [INFO][5995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" HandleID="k8s-pod-network.5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Workload="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.865 [INFO][5995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.865 [INFO][5995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.879 [WARNING][5995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" HandleID="k8s-pod-network.5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Workload="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.879 [INFO][5995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" HandleID="k8s-pod-network.5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Workload="localhost-k8s-whisker--754c986cf8--jmd7h-eth0" Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.881 [INFO][5995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:12:13.885318 containerd[1430]: 2025-08-13 00:12:13.883 [INFO][5986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960" Aug 13 00:12:13.885997 containerd[1430]: time="2025-08-13T00:12:13.885773022Z" level=info msg="TearDown network for sandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\" successfully" Aug 13 00:12:13.896153 containerd[1430]: time="2025-08-13T00:12:13.896067213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:12:13.896532 containerd[1430]: time="2025-08-13T00:12:13.896393642Z" level=info msg="RemovePodSandbox \"5256cf9d6e81cdf0bc2d1ab1a398cba028488c4a6977901c71196162471af960\" returns successfully" Aug 13 00:12:16.681987 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:33346.service - OpenSSH per-connection server daemon (10.0.0.1:33346). Aug 13 00:12:16.733279 sshd[6005]: Accepted publickey for core from 10.0.0.1 port 33346 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:12:16.735209 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:12:16.742145 systemd-logind[1418]: New session 14 of user core. Aug 13 00:12:16.752640 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:12:16.951294 sshd[6005]: pam_unix(sshd:session): session closed for user core Aug 13 00:12:16.958297 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:33346.service: Deactivated successfully. Aug 13 00:12:16.962207 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:12:16.962963 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:12:16.963947 systemd-logind[1418]: Removed session 14. Aug 13 00:12:21.966184 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:33356.service - OpenSSH per-connection server daemon (10.0.0.1:33356). Aug 13 00:12:22.007998 sshd[6021]: Accepted publickey for core from 10.0.0.1 port 33356 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:12:22.009393 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:12:22.013415 systemd-logind[1418]: New session 15 of user core. Aug 13 00:12:22.026553 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:12:22.202195 sshd[6021]: pam_unix(sshd:session): session closed for user core Aug 13 00:12:22.205738 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:33356.service: Deactivated successfully. Aug 13 00:12:22.207830 systemd[1]: session-15.scope: Deactivated successfully. 
Aug 13 00:12:22.209665 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:12:22.210557 systemd-logind[1418]: Removed session 15.
Aug 13 00:12:24.554848 kubelet[2484]: E0813 00:12:24.554808 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:12:27.213885 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:59130.service - OpenSSH per-connection server daemon (10.0.0.1:59130).
Aug 13 00:12:27.259249 sshd[6080]: Accepted publickey for core from 10.0.0.1 port 59130 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:27.260899 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:27.266631 systemd-logind[1418]: New session 16 of user core.
Aug 13 00:12:27.273585 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:12:27.471743 sshd[6080]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:27.487196 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:59130.service: Deactivated successfully.
Aug 13 00:12:27.489887 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:12:27.492334 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:12:27.495155 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:59138.service - OpenSSH per-connection server daemon (10.0.0.1:59138).
Aug 13 00:12:27.496764 systemd-logind[1418]: Removed session 16.
Aug 13 00:12:27.550664 sshd[6095]: Accepted publickey for core from 10.0.0.1 port 59138 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:27.552075 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:27.557376 systemd-logind[1418]: New session 17 of user core.
Aug 13 00:12:27.577739 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:12:27.838746 sshd[6095]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:27.846566 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:59138.service: Deactivated successfully.
Aug 13 00:12:27.849604 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:12:27.851393 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:12:27.864535 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:59154.service - OpenSSH per-connection server daemon (10.0.0.1:59154).
Aug 13 00:12:27.867340 systemd-logind[1418]: Removed session 17.
Aug 13 00:12:27.913453 sshd[6107]: Accepted publickey for core from 10.0.0.1 port 59154 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:27.915196 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:27.923423 systemd-logind[1418]: New session 18 of user core.
Aug 13 00:12:27.927690 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:12:28.727667 sshd[6107]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:28.735394 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:59154.service: Deactivated successfully.
Aug 13 00:12:28.741107 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:12:28.742498 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:12:28.752479 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:59158.service - OpenSSH per-connection server daemon (10.0.0.1:59158).
Aug 13 00:12:28.754665 systemd-logind[1418]: Removed session 18.
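On the recurring kubelet dns.go:153 error above: the applied nameserver line keeps exactly three servers, consistent with the classic glibc resolver limit of three nameservers that kubelet enforces when composing a pod's resolv.conf; extra entries are dropped and the event is logged. A small illustrative sketch of that trimming, not kubelet's actual code:

    package main

    import "fmt"

    // maxNameservers mirrors the glibc resolv.conf limit that kubelet applies.
    const maxNameservers = 3

    // trimNameservers keeps the first three servers and reports the rest,
    // matching the "some nameservers have been omitted" behaviour logged above.
    func trimNameservers(all []string) (kept, omitted []string) {
        if len(all) <= maxNameservers {
            return all, nil
        }
        return all[:maxNameservers], all[maxNameservers:]
    }

    func main() {
        kept, omitted := trimNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
        if len(omitted) > 0 {
            fmt.Printf("Nameserver limits exceeded, omitted %v, applied line: %v\n", omitted, kept)
        }
    }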
Aug 13 00:12:28.790571 sshd[6133]: Accepted publickey for core from 10.0.0.1 port 59158 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:28.792360 sshd[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:28.797136 systemd-logind[1418]: New session 19 of user core.
Aug 13 00:12:28.810699 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:12:29.316859 sshd[6133]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:29.327403 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:59158.service: Deactivated successfully.
Aug 13 00:12:29.330328 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:12:29.332257 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:12:29.342219 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:59166.service - OpenSSH per-connection server daemon (10.0.0.1:59166).
Aug 13 00:12:29.344038 systemd-logind[1418]: Removed session 19.
Aug 13 00:12:29.382972 sshd[6146]: Accepted publickey for core from 10.0.0.1 port 59166 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:29.384821 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:29.389677 systemd-logind[1418]: New session 20 of user core.
Aug 13 00:12:29.399572 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:12:29.605654 sshd[6146]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:29.612055 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:59166.service: Deactivated successfully.
Aug 13 00:12:29.615007 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:12:29.615948 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:12:29.617524 systemd-logind[1418]: Removed session 20.
Aug 13 00:12:34.620214 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:50890.service - OpenSSH per-connection server daemon (10.0.0.1:50890).
Aug 13 00:12:34.669793 sshd[6163]: Accepted publickey for core from 10.0.0.1 port 50890 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:34.671536 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:34.676461 systemd-logind[1418]: New session 21 of user core.
Aug 13 00:12:34.684605 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:12:35.128894 sshd[6163]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:35.133232 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:50890.service: Deactivated successfully.
Aug 13 00:12:35.135297 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:12:35.136690 systemd-logind[1418]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:12:35.137610 systemd-logind[1418]: Removed session 21.
Aug 13 00:12:36.110847 systemd[1]: run-containerd-runc-k8s.io-bdd390f20620cd249ea32afbdef69475bda009051481763c409a4be9fe3326fd-runc.XRkaRb.mount: Deactivated successfully.
Aug 13 00:12:36.555023 kubelet[2484]: E0813 00:12:36.554939 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:12:40.144275 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:50898.service - OpenSSH per-connection server daemon (10.0.0.1:50898).
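Each SSH connection above follows the same lifecycle: Accepted publickey, pam_unix session opened, systemd-logind "New session N", a transient session-N.scope, then close, Deactivated, and "Removed session N". That regularity makes the sessions easy to pair mechanically; the following self-contained sketch pairs the logind open/close lines and is only a reading aid for logs like this one, not a shipped tool:

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
    )

    // Patterns for the systemd-logind lines seen in this log.
    var (
        openRe  = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
        closeRe = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    func main() {
        // Two sample lines copied from the log above.
        log := `Aug 13 00:12:16.742145 systemd-logind[1418]: New session 14 of user core.
    Aug 13 00:12:16.963947 systemd-logind[1418]: Removed session 14.`

        open := map[string]string{} // session number -> user
        sc := bufio.NewScanner(strings.NewReader(log))
        for sc.Scan() {
            line := sc.Text()
            if m := openRe.FindStringSubmatch(line); m != nil {
                open[m[1]] = m[2]
            } else if m := closeRe.FindStringSubmatch(line); m != nil {
                fmt.Printf("session %s (user %s) opened and closed\n", m[1], open[m[1]])
                delete(open, m[1])
            }
        }
    }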
Aug 13 00:12:40.178464 sshd[6224]: Accepted publickey for core from 10.0.0.1 port 50898 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:12:40.180170 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:12:40.184039 systemd-logind[1418]: New session 22 of user core.
Aug 13 00:12:40.191503 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:12:40.349924 sshd[6224]: pam_unix(sshd:session): session closed for user core
Aug 13 00:12:40.353168 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:50898.service: Deactivated successfully.
Aug 13 00:12:40.357053 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:12:40.357742 systemd-logind[1418]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:12:40.358679 systemd-logind[1418]: Removed session 22.
Aug 13 00:12:40.554924 kubelet[2484]: E0813 00:12:40.554809 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"