Jul 11 00:15:25.880107 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 11 00:15:25.880128 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Jul 10 22:41:52 -00 2025
Jul 11 00:15:25.880138 kernel: KASLR enabled
Jul 11 00:15:25.880144 kernel: efi: EFI v2.7 by EDK II
Jul 11 00:15:25.880150 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 11 00:15:25.880155 kernel: random: crng init done
Jul 11 00:15:25.880162 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:15:25.880169 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 11 00:15:25.880175 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 00:15:25.880182 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:15:25.880188 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:15:25.880194 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:15:25.880200 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:15:25.880206 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:15:25.880214 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:15:25.880222 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:15:25.880229 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:15:25.880235 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:15:25.880242 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 11 00:15:25.880248 kernel: NUMA: Failed to initialise from firmware
Jul 11 00:15:25.880255 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:15:25.880261 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 11 00:15:25.880267 kernel: Zone ranges:
Jul 11 00:15:25.880274 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:15:25.880280 kernel: DMA32 empty
Jul 11 00:15:25.880287 kernel: Normal empty
Jul 11 00:15:25.880294 kernel: Movable zone start for each node
Jul 11 00:15:25.880300 kernel: Early memory node ranges
Jul 11 00:15:25.880306 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 11 00:15:25.880313 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 11 00:15:25.880319 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 11 00:15:25.880326 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 11 00:15:25.880332 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 11 00:15:25.880338 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 11 00:15:25.880345 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 11 00:15:25.880351 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:15:25.880358 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 11 00:15:25.880365 kernel: psci: probing for conduit method from ACPI.
Jul 11 00:15:25.880372 kernel: psci: PSCIv1.1 detected in firmware.
Jul 11 00:15:25.880378 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 11 00:15:25.880388 kernel: psci: Trusted OS migration not required
Jul 11 00:15:25.880395 kernel: psci: SMC Calling Convention v1.1
Jul 11 00:15:25.880402 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 11 00:15:25.880410 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 11 00:15:25.880417 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 11 00:15:25.880424 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 11 00:15:25.880430 kernel: Detected PIPT I-cache on CPU0
Jul 11 00:15:25.880437 kernel: CPU features: detected: GIC system register CPU interface
Jul 11 00:15:25.880444 kernel: CPU features: detected: Hardware dirty bit management
Jul 11 00:15:25.880451 kernel: CPU features: detected: Spectre-v4
Jul 11 00:15:25.880457 kernel: CPU features: detected: Spectre-BHB
Jul 11 00:15:25.880464 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 11 00:15:25.880471 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 11 00:15:25.880479 kernel: CPU features: detected: ARM erratum 1418040
Jul 11 00:15:25.880486 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 11 00:15:25.880493 kernel: alternatives: applying boot alternatives
Jul 11 00:15:25.880500 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:15:25.880508 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:15:25.880515 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:15:25.880521 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:15:25.880528 kernel: Fallback order for Node 0: 0
Jul 11 00:15:25.880535 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 11 00:15:25.880542 kernel: Policy zone: DMA
Jul 11 00:15:25.880548 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:15:25.880556 kernel: software IO TLB: area num 4.
Jul 11 00:15:25.880563 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 11 00:15:25.880570 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 11 00:15:25.880577 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:15:25.880584 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:15:25.880591 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:15:25.880598 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:15:25.880608 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:15:25.880615 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:15:25.880622 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:15:25.880629 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:15:25.880635 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 11 00:15:25.880658 kernel: GICv3: 256 SPIs implemented
Jul 11 00:15:25.880665 kernel: GICv3: 0 Extended SPIs implemented
Jul 11 00:15:25.880672 kernel: Root IRQ handler: gic_handle_irq
Jul 11 00:15:25.880679 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 11 00:15:25.880686 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 11 00:15:25.880692 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 11 00:15:25.880699 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 11 00:15:25.880706 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 11 00:15:25.880713 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 11 00:15:25.880720 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 11 00:15:25.880727 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:15:25.880735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:15:25.880742 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 11 00:15:25.880749 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 11 00:15:25.880756 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 11 00:15:25.880762 kernel: arm-pv: using stolen time PV
Jul 11 00:15:25.880769 kernel: Console: colour dummy device 80x25
Jul 11 00:15:25.880776 kernel: ACPI: Core revision 20230628
Jul 11 00:15:25.880783 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 11 00:15:25.880790 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:15:25.880797 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:15:25.880805 kernel: landlock: Up and running.
Jul 11 00:15:25.880812 kernel: SELinux: Initializing.
Jul 11 00:15:25.880819 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:15:25.880825 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:15:25.880832 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:15:25.880839 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:15:25.880846 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:15:25.880860 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:15:25.880867 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 11 00:15:25.880894 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 11 00:15:25.880901 kernel: Remapping and enabling EFI services.
Jul 11 00:15:25.880908 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:15:25.880915 kernel: Detected PIPT I-cache on CPU1
Jul 11 00:15:25.880921 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 11 00:15:25.880928 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 11 00:15:25.880935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:15:25.880942 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 11 00:15:25.880949 kernel: Detected PIPT I-cache on CPU2
Jul 11 00:15:25.880956 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 11 00:15:25.880965 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 11 00:15:25.880972 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:15:25.880982 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 11 00:15:25.880991 kernel: Detected PIPT I-cache on CPU3
Jul 11 00:15:25.880998 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 11 00:15:25.881005 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 11 00:15:25.881012 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:15:25.881019 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 11 00:15:25.881026 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:15:25.881034 kernel: SMP: Total of 4 processors activated.
Jul 11 00:15:25.881042 kernel: CPU features: detected: 32-bit EL0 Support
Jul 11 00:15:25.881049 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 11 00:15:25.881056 kernel: CPU features: detected: Common not Private translations
Jul 11 00:15:25.881063 kernel: CPU features: detected: CRC32 instructions
Jul 11 00:15:25.881070 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 11 00:15:25.881077 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 11 00:15:25.881085 kernel: CPU features: detected: LSE atomic instructions
Jul 11 00:15:25.881093 kernel: CPU features: detected: Privileged Access Never
Jul 11 00:15:25.881100 kernel: CPU features: detected: RAS Extension Support
Jul 11 00:15:25.881107 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 11 00:15:25.881114 kernel: CPU: All CPU(s) started at EL1
Jul 11 00:15:25.881122 kernel: alternatives: applying system-wide alternatives
Jul 11 00:15:25.881129 kernel: devtmpfs: initialized
Jul 11 00:15:25.881136 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:15:25.881143 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:15:25.881150 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:15:25.881159 kernel: SMBIOS 3.0.0 present.
Jul 11 00:15:25.881166 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 11 00:15:25.881173 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:15:25.881180 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 11 00:15:25.881188 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 11 00:15:25.881195 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 11 00:15:25.881202 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:15:25.881209 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Jul 11 00:15:25.881216 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:15:25.881225 kernel: cpuidle: using governor menu
Jul 11 00:15:25.881232 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 11 00:15:25.881239 kernel: ASID allocator initialised with 32768 entries
Jul 11 00:15:25.881246 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:15:25.881253 kernel: Serial: AMBA PL011 UART driver
Jul 11 00:15:25.881260 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 11 00:15:25.881268 kernel: Modules: 0 pages in range for non-PLT usage
Jul 11 00:15:25.881275 kernel: Modules: 509008 pages in range for PLT usage
Jul 11 00:15:25.881282 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:15:25.881290 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:15:25.881297 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 11 00:15:25.881305 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 11 00:15:25.881312 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:15:25.881319 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:15:25.881326 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 11 00:15:25.881333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 11 00:15:25.881340 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:15:25.881347 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:15:25.881361 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:15:25.881370 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:15:25.881385 kernel: ACPI: Interpreter enabled
Jul 11 00:15:25.881393 kernel: ACPI: Using GIC for interrupt routing
Jul 11 00:15:25.881400 kernel: ACPI: MCFG table detected, 1 entries
Jul 11 00:15:25.881407 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 11 00:15:25.881414 kernel: printk: console [ttyAMA0] enabled
Jul 11 00:15:25.881422 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:15:25.881549 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:15:25.881623 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 11 00:15:25.881686 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 11 00:15:25.881748 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 11 00:15:25.881823 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 11 00:15:25.881833 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 11 00:15:25.881840 kernel: PCI host bridge to bus 0000:00
Jul 11 00:15:25.881985 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 11 00:15:25.882050 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 00:15:25.882107 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 11 00:15:25.882162 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:15:25.882240 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 11 00:15:25.882314 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:15:25.882380 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 11 00:15:25.882447 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 11 00:15:25.882512 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:15:25.882576 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:15:25.882640 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 11 00:15:25.882703 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 11 00:15:25.882760 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 11 00:15:25.882815 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 11 00:15:25.882891 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 11 00:15:25.882902 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 11 00:15:25.882909 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 11 00:15:25.882916 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 11 00:15:25.882923 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 11 00:15:25.882931 kernel: iommu: Default domain type: Translated
Jul 11 00:15:25.882938 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 11 00:15:25.882945 kernel: efivars: Registered efivars operations
Jul 11 00:15:25.882954 kernel: vgaarb: loaded
Jul 11 00:15:25.882962 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 11 00:15:25.882969 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:15:25.882976 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:15:25.882983 kernel: pnp: PnP ACPI init
Jul 11 00:15:25.883053 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 11 00:15:25.883064 kernel: pnp: PnP ACPI: found 1 devices
Jul 11 00:15:25.883071 kernel: NET: Registered PF_INET protocol family
Jul 11 00:15:25.883078 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:15:25.883088 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:15:25.883095 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:15:25.883102 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:15:25.883110 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:15:25.883117 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:15:25.883125 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:15:25.883132 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:15:25.883139 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:15:25.883147 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:15:25.883155 kernel: kvm [1]: HYP mode not available
Jul 11 00:15:25.883162 kernel: Initialise system trusted keyrings
Jul 11 00:15:25.883169 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:15:25.883177 kernel: Key type asymmetric registered
Jul 11 00:15:25.883184 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:15:25.883191 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 00:15:25.883198 kernel: io scheduler mq-deadline registered
Jul 11 00:15:25.883205 kernel: io scheduler kyber registered
Jul 11 00:15:25.883212 kernel: io scheduler bfq registered
Jul 11 00:15:25.883221 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 11 00:15:25.883228 kernel: ACPI: button: Power Button [PWRB]
Jul 11 00:15:25.883236 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 11 00:15:25.883300 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 11 00:15:25.883310 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:15:25.883317 kernel: thunder_xcv, ver 1.0
Jul 11 00:15:25.883325 kernel: thunder_bgx, ver 1.0
Jul 11 00:15:25.883332 kernel: nicpf, ver 1.0
Jul 11 00:15:25.883339 kernel: nicvf, ver 1.0
Jul 11 00:15:25.883430 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 11 00:15:25.883493 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:15:25 UTC (1752192925)
Jul 11 00:15:25.883503 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 11 00:15:25.883510 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 11 00:15:25.883517 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 11 00:15:25.883525 kernel: watchdog: Hard watchdog permanently disabled
Jul 11 00:15:25.883532 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:15:25.883539 kernel: Segment Routing with IPv6
Jul 11 00:15:25.883548 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:15:25.883556 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:15:25.883563 kernel: Key type dns_resolver registered
Jul 11 00:15:25.883570 kernel: registered taskstats version 1
Jul 11 00:15:25.883577 kernel: Loading compiled-in X.509 certificates
Jul 11 00:15:25.883584 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 9d58afa0c1753353480d5539f26f662c9ce000cb'
Jul 11 00:15:25.883591 kernel: Key type .fscrypt registered
Jul 11 00:15:25.883598 kernel: Key type fscrypt-provisioning registered
Jul 11 00:15:25.883605 kernel: ima: No TPM chip found, activating TPM-bypass!
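The rtc-efi record above reports the boot-time clock both as a UTC timestamp and as a Unix epoch value; a quick standalone Python check (not part of the log) confirms the two agree:

```python
from datetime import datetime, timezone

# Epoch value taken from the rtc-efi line above.
print(datetime.fromtimestamp(1752192925, tz=timezone.utc).isoformat())
# -> 2025-07-11T00:15:25+00:00, matching "2025-07-11T00:15:25 UTC (1752192925)"
```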
Jul 11 00:15:25.883614 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:15:25.883621 kernel: ima: No architecture policies found
Jul 11 00:15:25.883628 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 11 00:15:25.883635 kernel: clk: Disabling unused clocks
Jul 11 00:15:25.883642 kernel: Freeing unused kernel memory: 39424K
Jul 11 00:15:25.883649 kernel: Run /init as init process
Jul 11 00:15:25.883656 kernel: with arguments:
Jul 11 00:15:25.883663 kernel: /init
Jul 11 00:15:25.883670 kernel: with environment:
Jul 11 00:15:25.883678 kernel: HOME=/
Jul 11 00:15:25.883685 kernel: TERM=linux
Jul 11 00:15:25.883692 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:15:25.883701 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:15:25.883710 systemd[1]: Detected virtualization kvm.
Jul 11 00:15:25.883718 systemd[1]: Detected architecture arm64.
Jul 11 00:15:25.883725 systemd[1]: Running in initrd.
Jul 11 00:15:25.883734 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:15:25.883742 systemd[1]: Hostname set to .
Jul 11 00:15:25.883750 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:15:25.883757 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:15:25.883765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:15:25.883773 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:15:25.883781 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:15:25.883789 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:15:25.883797 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:15:25.883805 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:15:25.883815 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:15:25.883822 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:15:25.883830 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:15:25.883838 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:15:25.883845 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:15:25.883861 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:15:25.883869 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:15:25.883884 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:15:25.883902 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:15:25.883910 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:15:25.883918 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:15:25.883926 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:15:25.883933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:15:25.883941 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:15:25.883951 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:15:25.883958 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:15:25.883966 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:15:25.883974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:15:25.883981 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:15:25.883989 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:15:25.883997 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:15:25.884004 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:15:25.884013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:15:25.884021 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:15:25.884029 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:15:25.884036 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:15:25.884045 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:15:25.884054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:15:25.884082 systemd-journald[237]: Collecting audit messages is disabled.
Jul 11 00:15:25.884101 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:15:25.884109 systemd-journald[237]: Journal started
Jul 11 00:15:25.884129 systemd-journald[237]: Runtime Journal (/run/log/journal/10716aa21f2a46b8822367f9957f6e79) is 5.9M, max 47.3M, 41.4M free.
Jul 11 00:15:25.877344 systemd-modules-load[239]: Inserted module 'overlay'
Jul 11 00:15:25.885513 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:15:25.887968 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:15:25.889891 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:15:25.892008 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:15:25.895945 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:15:25.896902 kernel: Bridge firewalling registered
Jul 11 00:15:25.896788 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 11 00:15:25.897955 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:15:25.899994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:15:25.902946 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:15:25.903986 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:15:25.907111 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:15:25.909799 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:15:25.910664 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:15:25.913324 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:15:25.922068 dracut-cmdline[273]: dracut-dracut-053
Jul 11 00:15:25.924319 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:15:25.943385 systemd-resolved[276]: Positive Trust Anchors:
Jul 11 00:15:25.943399 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:15:25.943429 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:15:25.947980 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jul 11 00:15:25.949831 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:15:25.950979 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:15:25.990903 kernel: SCSI subsystem initialized
Jul 11 00:15:25.994889 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:15:26.002907 kernel: iscsi: registered transport (tcp)
Jul 11 00:15:26.014950 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:15:26.014977 kernel: QLogic iSCSI HBA Driver
Jul 11 00:15:26.052574 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:15:26.066000 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:15:26.082174 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:15:26.082225 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:15:26.082961 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:15:26.129901 kernel: raid6: neonx8 gen() 7417 MB/s
Jul 11 00:15:26.147899 kernel: raid6: neonx4 gen() 10990 MB/s
Jul 11 00:15:26.164887 kernel: raid6: neonx2 gen() 12471 MB/s
Jul 11 00:15:26.181893 kernel: raid6: neonx1 gen() 10139 MB/s
Jul 11 00:15:26.198901 kernel: raid6: int64x8 gen() 6748 MB/s
Jul 11 00:15:26.215893 kernel: raid6: int64x4 gen() 7120 MB/s
Jul 11 00:15:26.232905 kernel: raid6: int64x2 gen() 5928 MB/s
Jul 11 00:15:26.249899 kernel: raid6: int64x1 gen() 5056 MB/s
Jul 11 00:15:26.249931 kernel: raid6: using algorithm neonx2 gen() 12471 MB/s
Jul 11 00:15:26.266904 kernel: raid6: .... xor() 10194 MB/s, rmw enabled
Jul 11 00:15:26.266927 kernel: raid6: using neon recovery algorithm
Jul 11 00:15:26.271898 kernel: xor: measuring software checksum speed
Jul 11 00:15:26.271923 kernel: 8regs : 19702 MB/sec
Jul 11 00:15:26.273289 kernel: 32regs : 17289 MB/sec
Jul 11 00:15:26.273301 kernel: arm64_neon : 27132 MB/sec
Jul 11 00:15:26.273310 kernel: xor: using function: arm64_neon (27132 MB/sec)
Jul 11 00:15:26.324891 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:15:26.334926 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:15:26.348084 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:15:26.358316 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jul 11 00:15:26.361383 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:15:26.363624 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:15:26.377154 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jul 11 00:15:26.400814 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:15:26.412992 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:15:26.452540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:15:26.462354 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:15:26.473359 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:15:26.475121 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:15:26.476794 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:15:26.478819 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:15:26.488910 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:15:26.491015 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 11 00:15:26.491143 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:15:26.493962 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:15:26.493993 kernel: GPT:9289727 != 19775487
Jul 11 00:15:26.494003 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:15:26.495151 kernel: GPT:9289727 != 19775487
Jul 11 00:15:26.495173 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:15:26.495184 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:15:26.497770 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:15:26.497899 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:15:26.501647 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:15:26.505119 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:15:26.505251 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:15:26.506888 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:15:26.514287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:15:26.517843 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (510)
Jul 11 00:15:26.517414 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:15:26.522976 kernel: BTRFS: device fsid f5d5cad7-cb7a-4b07-bec7-847b84711ad7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (523)
Jul 11 00:15:26.528960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:15:26.534307 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:15:26.543708 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:15:26.547861 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:15:26.551349 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:15:26.552241 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:15:26.573014 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:15:26.574414 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:15:26.578835 disk-uuid[550]: Primary Header is updated.
Jul 11 00:15:26.578835 disk-uuid[550]: Secondary Entries is updated.
Jul 11 00:15:26.578835 disk-uuid[550]: Secondary Header is updated.
Jul 11 00:15:26.581411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:15:26.594011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:15:27.594894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:15:27.595135 disk-uuid[551]: The operation has completed successfully.
Jul 11 00:15:27.617330 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:15:27.617416 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:15:27.633046 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:15:27.635902 sh[572]: Success
Jul 11 00:15:27.650300 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 11 00:15:27.677072 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:15:27.690153 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:15:27.691933 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:15:27.701321 kernel: BTRFS info (device dm-0): first mount of filesystem f5d5cad7-cb7a-4b07-bec7-847b84711ad7
Jul 11 00:15:27.701354 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:15:27.701365 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:15:27.703063 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:15:27.703079 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:15:27.706486 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:15:27.707486 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:15:27.716067 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:15:27.717316 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:15:27.723389 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:15:27.723420 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:15:27.723895 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:15:27.725900 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:15:27.732123 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:15:27.733515 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:15:27.738899 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:15:27.745037 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
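The GPT warnings above are internally consistent with the virtio-blk geometry, and disk-uuid.service clears them by rewriting the secondary GPT (the "Secondary Entries/Header is updated" records). A standalone Python sketch of the arithmetic follows; the reading that the image was built for a smaller disk and then attached to a larger one is an inference, not something the log states:

```python
LOGICAL_BLOCK = 512            # bytes, per the virtio_blk line
TOTAL_BLOCKS = 19_775_488      # "[vda] 19775488 512-byte logical blocks"
ALT_HEADER_LBA = 9_289_727     # where the backup GPT header was actually found

size = TOTAL_BLOCKS * LOGICAL_BLOCK
print(f"{size / 1e9:.1f} GB / {size / 2**30:.2f} GiB")  # 10.1 GB / 9.43 GiB, as logged
print(TOTAL_BLOCKS - 1)  # 19775487: the last LBA, where GPT expects the backup header

# A backup header at LBA 9289727 corresponds to a ~4.43 GiB device, which is
# why the kernel prints "9289727 != 19775487" until the GPT is rewritten.
print(f"{(ALT_HEADER_LBA + 1) * LOGICAL_BLOCK / 2**30:.2f} GiB")
```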
Jul 11 00:15:27.806089 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:15:27.815074 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:15:27.846415 systemd-networkd[761]: lo: Link UP
Jul 11 00:15:27.846425 systemd-networkd[761]: lo: Gained carrier
Jul 11 00:15:27.847110 systemd-networkd[761]: Enumeration completed
Jul 11 00:15:27.847377 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:15:27.847548 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:15:27.847551 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:15:27.848433 systemd-networkd[761]: eth0: Link UP
Jul 11 00:15:27.848436 systemd-networkd[761]: eth0: Gained carrier
Jul 11 00:15:27.848443 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:15:27.849340 systemd[1]: Reached target network.target - Network.
Jul 11 00:15:27.860159 ignition[661]: Ignition 2.19.0
Jul 11 00:15:27.860172 ignition[661]: Stage: fetch-offline
Jul 11 00:15:27.860204 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:15:27.860213 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:15:27.861919 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:15:27.860361 ignition[661]: parsed url from cmdline: ""
Jul 11 00:15:27.860364 ignition[661]: no config URL provided
Jul 11 00:15:27.860368 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:15:27.860375 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:15:27.860397 ignition[661]: op(1): [started] loading QEMU firmware config module
Jul 11 00:15:27.860402 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:15:27.869731 ignition[661]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:15:27.907064 ignition[661]: parsing config with SHA512: 8ddfec78a8bdbb5b81502ee635f2bc2c77ae0e437597b76acbedf972e7aa38ece7e0f83b0df567ead59d21d54fff38a97a511ab8d0ae66a1b87317932e121bc3
Jul 11 00:15:27.910800 unknown[661]: fetched base config from "system"
Jul 11 00:15:27.910810 unknown[661]: fetched user config from "qemu"
Jul 11 00:15:27.911227 ignition[661]: fetch-offline: fetch-offline passed
Jul 11 00:15:27.911290 ignition[661]: Ignition finished successfully
Jul 11 00:15:27.913155 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:15:27.914112 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:15:27.923004 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:15:27.933007 ignition[772]: Ignition 2.19.0
Jul 11 00:15:27.933016 ignition[772]: Stage: kargs
Jul 11 00:15:27.933176 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:15:27.933185 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:15:27.934036 ignition[772]: kargs: kargs passed
Jul 11 00:15:27.934078 ignition[772]: Ignition finished successfully
Jul 11 00:15:27.937573 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
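Ignition logs a SHA512 digest alongside the config it parses (the "parsing config with SHA512: ..." record above). A minimal sketch for recomputing such a digest locally, assuming the hash is taken over the raw config bytes (the log does not spell this out) and using a hypothetical file path:

```python
import hashlib

# Hypothetical path to a local copy of the Ignition config.
with open("config.ign", "rb") as f:
    print(hashlib.sha512(f.read()).hexdigest())
# Compare the output against the digest in the log record above.
```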
Jul 11 00:15:27.947065 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:15:27.955748 ignition[781]: Ignition 2.19.0
Jul 11 00:15:27.955758 ignition[781]: Stage: disks
Jul 11 00:15:27.955936 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:15:27.955946 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:15:27.956764 ignition[781]: disks: disks passed
Jul 11 00:15:27.958540 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:15:27.956804 ignition[781]: Ignition finished successfully
Jul 11 00:15:27.959511 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:15:27.960501 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:15:27.961937 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:15:27.963143 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:15:27.964448 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:15:27.981059 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:15:27.989634 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:15:27.993370 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:15:28.007977 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:15:28.049892 kernel: EXT4-fs (vda9): mounted filesystem a2a437d1-0a8e-46b9-88bf-4a47ff29fe90 r/w with ordered data mode. Quota mode: none.
Jul 11 00:15:28.050076 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:15:28.051098 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:15:28.070974 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:15:28.072360 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:15:28.073317 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:15:28.073395 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:15:28.073418 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:15:28.079247 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (799)
Jul 11 00:15:28.078715 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:15:28.081030 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:15:28.084952 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:15:28.084973 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:15:28.084984 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:15:28.084999 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:15:28.086733 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:15:28.126943 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:15:28.131056 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:15:28.135021 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:15:28.138295 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:15:28.202173 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:15:28.209024 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:15:28.211116 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:15:28.215887 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:15:28.227556 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:15:28.232630 ignition[913]: INFO : Ignition 2.19.0
Jul 11 00:15:28.232630 ignition[913]: INFO : Stage: mount
Jul 11 00:15:28.232630 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:15:28.232630 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:15:28.232630 ignition[913]: INFO : mount: mount passed
Jul 11 00:15:28.232630 ignition[913]: INFO : Ignition finished successfully
Jul 11 00:15:28.233998 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:15:28.245961 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:15:28.700933 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:15:28.713077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:15:28.718585 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (926)
Jul 11 00:15:28.718616 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:15:28.718628 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:15:28.719225 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:15:28.721904 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:15:28.722659 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:15:28.738438 ignition[943]: INFO : Ignition 2.19.0
Jul 11 00:15:28.738438 ignition[943]: INFO : Stage: files
Jul 11 00:15:28.739649 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:15:28.739649 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:15:28.739649 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:15:28.744274 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:15:28.744274 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:15:28.744274 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:15:28.744274 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:15:28.744274 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:15:28.744274 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 11 00:15:28.744274 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 11 00:15:28.742770 unknown[943]: wrote ssh authorized keys file for user: core
Jul 11 00:15:28.802564 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 00:15:28.981153 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 11 00:15:28.981153 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 11 00:15:28.984780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 11 00:15:29.455182 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 11 00:15:29.679013 systemd-networkd[761]: eth0: Gained IPv6LL
Jul 11 00:15:29.984026 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 11 00:15:29.986104 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 11 00:15:29.986104 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:15:29.986104 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:15:29.986104 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 11 00:15:29.986104 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 11 00:15:29.986104 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:15:29.986104 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:15:29.986104 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 11 00:15:29.986104 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:15:30.016045 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:15:30.019378 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:15:30.020667 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:15:30.020667 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 00:15:30.020667 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:15:30.020667 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:15:30.020667 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:15:30.020667 ignition[943]: INFO : files: files passed
Jul 11 00:15:30.020667 ignition[943]: INFO : Ignition finished successfully
Jul 11 00:15:30.021502 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 00:15:30.032044 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:15:30.034196 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:15:30.035515 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:15:30.035585 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:15:30.041993 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:15:30.045040 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:15:30.045040 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:15:30.047703 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:15:30.048586 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:15:30.051001 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:15:30.065093 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:15:30.083224 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:15:30.083319 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:15:30.085046 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:15:30.086505 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:15:30.087883 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:15:30.088515 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:15:30.104184 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:15:30.111004 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:15:30.118257 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:15:30.119282 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:15:30.120933 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:15:30.122319 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:15:30.122425 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:15:30.124370 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:15:30.125890 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:15:30.127155 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:15:30.128573 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:15:30.130175 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:15:30.131703 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:15:30.133252 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:15:30.134770 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:15:30.136358 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:15:30.137684 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:15:30.138906 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:15:30.139015 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:15:30.140942 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:15:30.142492 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:15:30.144060 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:15:30.147957 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:15:30.149960 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:15:30.150072 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:15:30.152248 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:15:30.152358 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:15:30.154057 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:15:30.155490 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:15:30.156823 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:15:30.158766 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:15:30.159481 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:15:30.160687 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:15:30.160777 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:15:30.161825 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:15:30.161922 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:15:30.162995 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:15:30.163099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:15:30.164340 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:15:30.164433 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:15:30.177024 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:15:30.177653 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:15:30.177765 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:15:30.180206 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:15:30.181102 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:15:30.181229 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:15:30.182720 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:15:30.182833 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:15:30.191134 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:15:30.191595 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:15:30.191675 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 11 00:15:30.195757 ignition[998]: INFO : Ignition 2.19.0 Jul 11 00:15:30.197043 ignition[998]: INFO : Stage: umount Jul 11 00:15:30.197921 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:15:30.198924 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:15:30.200669 ignition[998]: INFO : umount: umount passed Jul 11 00:15:30.200669 ignition[998]: INFO : Ignition finished successfully Jul 11 00:15:30.205355 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:15:30.205465 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 11 00:15:30.206521 systemd[1]: Stopped target network.target - Network. Jul 11 00:15:30.207567 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:15:30.207618 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:15:30.209663 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:15:30.209704 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:15:30.210937 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:15:30.210975 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:15:30.212305 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:15:30.212343 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:15:30.213924 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:15:30.215452 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:15:30.222914 systemd-networkd[761]: eth0: DHCPv6 lease lost Jul 11 00:15:30.224258 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:15:30.224372 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:15:30.226117 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:15:30.226148 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:15:30.234030 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:15:30.234712 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:15:30.234759 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:15:30.236400 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:15:30.238571 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:15:30.238659 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:15:30.242983 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:15:30.243061 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:15:30.244126 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:15:30.244166 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:15:30.245565 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:15:30.245602 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:15:30.248552 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:15:30.248714 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:15:30.251085 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:15:30.251164 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:15:30.254050 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:15:30.254094 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:15:30.254965 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:15:30.254993 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:15:30.256582 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:15:30.256625 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jul 11 00:15:30.258870 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:15:30.258927 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:15:30.261049 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:15:30.261091 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:15:30.269022 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:15:30.269848 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:15:30.269912 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:15:30.271605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:15:30.271644 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:15:30.273370 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:15:30.274919 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:15:30.276256 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:15:30.276337 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:15:30.278233 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:15:30.279712 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:15:30.279760 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:15:30.281885 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:15:30.290521 systemd[1]: Switching root. Jul 11 00:15:30.321586 systemd-journald[237]: Journal stopped Jul 11 00:15:30.997273 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jul 11 00:15:30.997326 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:15:30.997338 kernel: SELinux: policy capability open_perms=1 Jul 11 00:15:30.997348 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:15:30.997360 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:15:30.997377 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:15:30.997387 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:15:30.997396 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:15:30.997406 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:15:30.997416 kernel: audit: type=1403 audit(1752192930.462:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:15:30.997426 systemd[1]: Successfully loaded SELinux policy in 28.948ms. Jul 11 00:15:30.997446 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.286ms. Jul 11 00:15:30.997458 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:15:30.997469 systemd[1]: Detected virtualization kvm. Jul 11 00:15:30.997481 systemd[1]: Detected architecture arm64. Jul 11 00:15:30.997491 systemd[1]: Detected first boot. Jul 11 00:15:30.997501 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:15:30.997511 zram_generator::config[1044]: No configuration found. Jul 11 00:15:30.997522 systemd[1]: Populated /etc with preset unit settings. 
Jul 11 00:15:30.997532 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:15:30.997543 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 11 00:15:30.997553 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:15:30.997565 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:15:30.997576 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:15:30.997586 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:15:30.997597 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:15:30.997608 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:15:30.997619 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:15:30.997629 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:15:30.997641 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:15:30.997653 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:15:30.997665 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:15:30.997676 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 00:15:30.997686 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:15:30.997696 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:15:30.997707 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:15:30.997717 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 11 00:15:30.997731 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:15:30.997741 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 11 00:15:30.997754 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 11 00:15:30.997765 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 11 00:15:30.997776 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:15:30.997786 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:15:30.997797 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:15:30.997808 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:15:30.997818 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:15:30.997830 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:15:30.997850 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:15:30.997862 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:15:30.997896 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:15:30.997910 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:15:30.997922 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:15:30.997932 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
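The \x2d sequences in slice names like system-addon\x2dconfig.slice are systemd unit-name escaping, not corruption: '-' separates hierarchy levels in unit names, so a literal '-' inside a component is C-escaped. A simplified sketch of the rule (see systemd.unit(5) for the authoritative version):

    def unit_escape(s: str) -> str:
        """Simplified systemd unit-name escaping: '/' -> '-'; keep
        alphanumerics plus ':', '_' and non-leading '.'; escape the rest."""
        out = []
        for i, ch in enumerate(s):
            if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)
            elif ch == "/":
                out.append("-")  # path separator maps to the hierarchy separator
            else:
                out.append(f"\\x{ord(ch):02x}")
        return "".join(out)

    print(unit_escape("addon-config"))       # -> addon\x2dconfig (as in the slice above)
    print(unit_escape("disk/by-label/OEM"))  # -> disk-by\x2dlabel-OEM (as in the .device unit)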
Jul 11 00:15:30.997942 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:15:30.997953 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:15:30.997963 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 00:15:30.997976 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:15:30.997987 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:15:30.997999 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:15:30.998009 systemd[1]: Reached target machines.target - Containers. Jul 11 00:15:30.998020 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 11 00:15:30.998031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:15:30.998041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:15:30.998052 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:15:30.998063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:15:30.998075 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:15:30.998086 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:15:30.998099 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 00:15:30.998110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:15:30.998121 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:15:30.998131 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:15:30.998141 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 11 00:15:30.998152 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:15:30.998163 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:15:30.998174 kernel: fuse: init (API version 7.39) Jul 11 00:15:30.998184 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:15:30.998195 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:15:30.998205 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:15:30.998215 kernel: loop: module loaded Jul 11 00:15:30.998225 kernel: ACPI: bus type drm_connector registered Jul 11 00:15:30.998235 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 00:15:30.998246 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:15:30.998258 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:15:30.998268 systemd[1]: Stopped verity-setup.service. Jul 11 00:15:30.998278 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:15:30.998304 systemd-journald[1111]: Collecting audit messages is disabled. Jul 11 00:15:30.998326 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:15:30.998336 systemd[1]: Mounted media.mount - External Media Directory. 
Jul 11 00:15:30.998348 systemd-journald[1111]: Journal started Jul 11 00:15:30.998370 systemd-journald[1111]: Runtime Journal (/run/log/journal/10716aa21f2a46b8822367f9957f6e79) is 5.9M, max 47.3M, 41.4M free. Jul 11 00:15:30.810176 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:15:30.831674 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 11 00:15:30.832023 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 11 00:15:31.001052 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:15:31.001632 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:15:31.002574 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:15:31.003528 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:15:31.004575 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:15:31.006910 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:15:31.008094 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:15:31.008232 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:15:31.009401 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:15:31.009547 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:15:31.010715 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:15:31.010935 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:15:31.011990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:15:31.012128 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:15:31.013463 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:15:31.013604 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:15:31.014714 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:15:31.014868 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:15:31.015972 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:15:31.017070 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:15:31.018396 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:15:31.031807 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:15:31.047123 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:15:31.048981 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 00:15:31.049770 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:15:31.049810 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:15:31.051553 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 11 00:15:31.053493 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 00:15:31.055312 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 00:15:31.056165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 11 00:15:31.057561 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 00:15:31.059180 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 11 00:15:31.060084 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:15:31.063059 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:15:31.064005 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:15:31.065049 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:15:31.069707 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:15:31.074159 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 00:15:31.075584 systemd-journald[1111]: Time spent on flushing to /var/log/journal/10716aa21f2a46b8822367f9957f6e79 is 19.715ms for 853 entries. Jul 11 00:15:31.075584 systemd-journald[1111]: System Journal (/var/log/journal/10716aa21f2a46b8822367f9957f6e79) is 8.0M, max 195.6M, 187.6M free. Jul 11 00:15:31.104163 systemd-journald[1111]: Received client request to flush runtime journal. Jul 11 00:15:31.104210 kernel: loop0: detected capacity change from 0 to 114432 Jul 11 00:15:31.077936 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:15:31.079194 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:15:31.080339 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:15:31.083029 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 00:15:31.089702 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 11 00:15:31.090881 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:15:31.092557 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:15:31.102096 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 11 00:15:31.107280 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:15:31.113091 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:15:31.117548 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 11 00:15:31.118889 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:15:31.121182 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 00:15:31.130107 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:15:31.131695 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:15:31.132417 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 11 00:15:31.146299 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jul 11 00:15:31.146319 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jul 11 00:15:31.149117 kernel: loop1: detected capacity change from 0 to 211168 Jul 11 00:15:31.150372 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
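The journald flush statistics above allow a quick rate estimate; using the numbers exactly as logged:

    # 853 entries flushed to /var/log/journal in 19.715 ms (from the line above)
    flush_ms, entries = 19.715, 853
    print(f"{flush_ms / entries * 1000:.1f} us/entry")  # ~23.1 us per entry on this VM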
Jul 11 00:15:31.187923 kernel: loop2: detected capacity change from 0 to 114328 Jul 11 00:15:31.236916 kernel: loop3: detected capacity change from 0 to 114432 Jul 11 00:15:31.241935 kernel: loop4: detected capacity change from 0 to 211168 Jul 11 00:15:31.252068 kernel: loop5: detected capacity change from 0 to 114328 Jul 11 00:15:31.260518 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 00:15:31.261330 (sd-merge)[1179]: Merged extensions into '/usr'. Jul 11 00:15:31.264945 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:15:31.264960 systemd[1]: Reloading... Jul 11 00:15:31.316915 zram_generator::config[1206]: No configuration found. Jul 11 00:15:31.345962 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:15:31.410949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:15:31.447091 systemd[1]: Reloading finished in 181 ms. Jul 11 00:15:31.483942 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:15:31.485129 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:15:31.498146 systemd[1]: Starting ensure-sysext.service... Jul 11 00:15:31.499848 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:15:31.510979 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:15:31.510993 systemd[1]: Reloading... Jul 11 00:15:31.519097 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:15:31.519353 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:15:31.520006 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:15:31.520226 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jul 11 00:15:31.520277 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jul 11 00:15:31.522443 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:15:31.522457 systemd-tmpfiles[1241]: Skipping /boot Jul 11 00:15:31.529534 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:15:31.529553 systemd-tmpfiles[1241]: Skipping /boot Jul 11 00:15:31.558912 zram_generator::config[1268]: No configuration found. Jul 11 00:15:31.633518 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:15:31.669712 systemd[1]: Reloading finished in 158 ms. Jul 11 00:15:31.689650 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 00:15:31.697415 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:15:31.704739 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:15:31.707142 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
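The (sd-merge) lines above record systemd-sysext merging the three extension images into /usr. Roughly, the merge step scans a fixed set of extension directories for raw images or directory trees, then overlay-mounts their /usr (and /opt) hierarchies; below is a minimal sketch of just the discovery half, assuming a subset of the search path documented in systemd-sysext(8) (the overlayfs mount itself is omitted):

    from pathlib import Path

    # Subset of the sysext search path; see systemd-sysext(8) for the full list.
    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions():
        found = []
        for d in SYSEXT_DIRS:
            p = Path(d)
            if p.is_dir():
                found.extend(e.name for e in p.iterdir()
                             if e.suffix == ".raw" or e.is_dir())
        return found

    print(discover_extensions())  # on this host, Ignition placed kubernetes.raw in /etc/extensions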
Jul 11 00:15:31.709285 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:15:31.714103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:15:31.722156 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:15:31.724488 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:15:31.728986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:15:31.731101 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:15:31.736469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:15:31.739162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:15:31.740054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:15:31.749079 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 11 00:15:31.750491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:15:31.750621 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:15:31.751997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:15:31.752107 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:15:31.753377 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:15:31.753493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:15:31.756411 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 00:15:31.758510 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Jul 11 00:15:31.761017 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:15:31.762406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:15:31.771109 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 00:15:31.772610 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 11 00:15:31.776468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:15:31.780212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:15:31.783190 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:15:31.788074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:15:31.789000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:15:31.789509 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:15:31.791343 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 00:15:31.792907 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 00:15:31.794194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:15:31.794306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 11 00:15:31.795767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:15:31.795921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:15:31.797436 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:15:31.797555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:15:31.801728 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 00:15:31.809101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:15:31.814200 augenrules[1363]: No rules Jul 11 00:15:31.816210 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:15:31.818404 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:15:31.824042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:15:31.825917 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:15:31.826982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:15:31.828946 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:15:31.829703 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:15:31.831890 systemd[1]: Finished ensure-sysext.service. Jul 11 00:15:31.833285 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:15:31.834422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:15:31.835945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:15:31.837091 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:15:31.837217 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:15:31.853264 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 11 00:15:31.855171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:15:31.857169 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:15:31.858467 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:15:31.858614 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:15:31.861193 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:15:31.861238 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:15:31.863032 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 00:15:31.885899 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1365) Jul 11 00:15:31.893852 systemd-resolved[1309]: Positive Trust Anchors: Jul 11 00:15:31.893910 systemd-resolved[1309]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:15:31.893943 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:15:31.904114 systemd-resolved[1309]: Defaulting to hostname 'linux'. Jul 11 00:15:31.913289 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:15:31.914310 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:15:31.923431 systemd-networkd[1376]: lo: Link UP Jul 11 00:15:31.923438 systemd-networkd[1376]: lo: Gained carrier Jul 11 00:15:31.924202 systemd-networkd[1376]: Enumeration completed Jul 11 00:15:31.924298 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:15:31.925243 systemd[1]: Reached target network.target - Network. Jul 11 00:15:31.932173 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:15:31.932181 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:15:31.938060 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 00:15:31.940090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:15:31.942075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 00:15:31.943155 systemd-networkd[1376]: eth0: Link UP Jul 11 00:15:31.943167 systemd-networkd[1376]: eth0: Gained carrier Jul 11 00:15:31.943200 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:15:31.950234 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:15:31.956186 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 00:15:31.957129 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 00:15:31.958753 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 00:15:31.960014 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:15:31.961378 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Jul 11 00:15:31.962475 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:15:31.962528 systemd-timesyncd[1385]: Initial clock synchronization to Fri 2025-07-11 00:15:32.259059 UTC. Jul 11 00:15:31.982065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:15:31.994084 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 11 00:15:31.996360 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
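The positive trust anchor systemd-resolved logged at the top of this stretch is the IANA root-zone KSK-2017: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). A tiny parser for the presentation format as it appears in the log:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
    # key tag 20326 / RSASHA256 / SHA-256 identify the root KSK-2017 anchor
    assert (owner, int(key_tag), int(alg), int(digest_type)) == (".", 20326, 8, 2)
    print(f"trust anchor for '{owner}': key tag {key_tag}, SHA-256 digest {digest[:12]}...")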
Jul 11 00:15:32.013124 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:15:32.028006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:15:32.038318 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 11 00:15:32.039469 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:15:32.041096 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:15:32.042019 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 00:15:32.043013 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 00:15:32.044113 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 00:15:32.045007 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 00:15:32.045960 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 00:15:32.046847 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:15:32.046885 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:15:32.047582 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:15:32.049023 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 00:15:32.051168 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:15:32.057622 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 00:15:32.059985 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 11 00:15:32.061398 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:15:32.062346 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:15:32.063081 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:15:32.063774 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:15:32.063805 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:15:32.064689 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 00:15:32.066529 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:15:32.067616 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:15:32.071050 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:15:32.073514 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:15:32.075298 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 00:15:32.077085 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:15:32.081909 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:15:32.083728 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:15:32.086135 jq[1412]: false Jul 11 00:15:32.087685 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 11 00:15:32.089880 extend-filesystems[1413]: Found loop3 Jul 11 00:15:32.090865 extend-filesystems[1413]: Found loop4 Jul 11 00:15:32.091940 extend-filesystems[1413]: Found loop5 Jul 11 00:15:32.091940 extend-filesystems[1413]: Found vda Jul 11 00:15:32.091940 extend-filesystems[1413]: Found vda1 Jul 11 00:15:32.091940 extend-filesystems[1413]: Found vda2 Jul 11 00:15:32.091940 extend-filesystems[1413]: Found vda3 Jul 11 00:15:32.091940 extend-filesystems[1413]: Found usr Jul 11 00:15:32.091940 extend-filesystems[1413]: Found vda4 Jul 11 00:15:32.091940 extend-filesystems[1413]: Found vda6 Jul 11 00:15:32.091940 extend-filesystems[1413]: Found vda7 Jul 11 00:15:32.091940 extend-filesystems[1413]: Found vda9 Jul 11 00:15:32.091940 extend-filesystems[1413]: Checking size of /dev/vda9 Jul 11 00:15:32.091517 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:15:32.096052 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:15:32.096464 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:15:32.098081 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:15:32.103035 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:15:32.106649 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 11 00:15:32.114053 jq[1430]: true Jul 11 00:15:32.111726 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:15:32.111934 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:15:32.112198 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:15:32.112332 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:15:32.114251 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:15:32.114404 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:15:32.117993 extend-filesystems[1413]: Resized partition /dev/vda9 Jul 11 00:15:32.131773 dbus-daemon[1411]: [system] SELinux support is enabled Jul 11 00:15:32.131956 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 00:15:32.137479 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:15:32.137502 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:15:32.139160 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Jul 11 00:15:32.147270 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1347) Jul 11 00:15:32.147300 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:15:32.138961 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:15:32.147422 jq[1436]: true Jul 11 00:15:32.138978 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
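The resize that starts here completes a few lines below ("resized filesystem to 1864699"; the extend-filesystems summary notes 4k blocks). Converting the logged block counts to sizes, assuming that 4 KiB block size:

    BLOCK = 4096  # bytes, per the "(4k) blocks" note in the extend-filesystems output below
    old_blocks, new_blocks = 553_472, 1_864_699  # from the EXT4-fs resize line above
    to_gib = lambda n: n * BLOCK / 2**30
    print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")  # ~2.11 -> ~7.11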
Jul 11 00:15:32.144601 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:15:32.169165 tar[1435]: linux-arm64/LICENSE Jul 11 00:15:32.171449 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:15:32.182943 update_engine[1426]: I20250711 00:15:32.181779 1426 main.cc:92] Flatcar Update Engine starting Jul 11 00:15:32.183885 tar[1435]: linux-arm64/helm Jul 11 00:15:32.184470 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:15:32.184470 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:15:32.184470 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:15:32.189621 extend-filesystems[1413]: Resized filesystem in /dev/vda9 Jul 11 00:15:32.192625 update_engine[1426]: I20250711 00:15:32.189307 1426 update_check_scheduler.cc:74] Next update check in 7m15s Jul 11 00:15:32.188267 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:15:32.188440 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:15:32.192939 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:15:32.195659 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 00:15:32.197040 systemd-logind[1421]: New seat seat0. Jul 11 00:15:32.202366 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:15:32.203227 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 00:15:32.222486 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:15:32.225847 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:15:32.230313 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:15:32.257946 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:15:32.357839 containerd[1445]: time="2025-07-11T00:15:32.357714438Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:15:32.383976 containerd[1445]: time="2025-07-11T00:15:32.383928119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:15:32.385577 containerd[1445]: time="2025-07-11T00:15:32.385539953Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:15:32.385577 containerd[1445]: time="2025-07-11T00:15:32.385575836Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:15:32.385666 containerd[1445]: time="2025-07-11T00:15:32.385603921Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:15:32.385785 containerd[1445]: time="2025-07-11T00:15:32.385761599Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:15:32.385814 containerd[1445]: time="2025-07-11T00:15:32.385787069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:15:32.385866 containerd[1445]: time="2025-07-11T00:15:32.385846308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:15:32.385901 containerd[1445]: time="2025-07-11T00:15:32.385863938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:15:32.386075 containerd[1445]: time="2025-07-11T00:15:32.386049991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:15:32.386102 containerd[1445]: time="2025-07-11T00:15:32.386073180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:15:32.386102 containerd[1445]: time="2025-07-11T00:15:32.386086869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:15:32.386102 containerd[1445]: time="2025-07-11T00:15:32.386096576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:15:32.386188 containerd[1445]: time="2025-07-11T00:15:32.386168799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:15:32.386419 containerd[1445]: time="2025-07-11T00:15:32.386394883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:15:32.387022 containerd[1445]: time="2025-07-11T00:15:32.386498591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:15:32.387022 containerd[1445]: time="2025-07-11T00:15:32.386517508Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:15:32.387022 containerd[1445]: time="2025-07-11T00:15:32.386598898Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:15:32.387022 containerd[1445]: time="2025-07-11T00:15:32.386652951Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:15:32.390466 containerd[1445]: time="2025-07-11T00:15:32.390432579Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:15:32.390521 containerd[1445]: time="2025-07-11T00:15:32.390491277Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:15:32.390521 containerd[1445]: time="2025-07-11T00:15:32.390508327Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 11 00:15:32.390578 containerd[1445]: time="2025-07-11T00:15:32.390522888Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:15:32.390597 containerd[1445]: time="2025-07-11T00:15:32.390537324Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 11 00:15:32.390746 containerd[1445]: time="2025-07-11T00:15:32.390722091Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:15:32.391035 containerd[1445]: time="2025-07-11T00:15:32.391003472Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:15:32.391146 containerd[1445]: time="2025-07-11T00:15:32.391124437Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:15:32.391180 containerd[1445]: time="2025-07-11T00:15:32.391148166Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:15:32.391180 containerd[1445]: time="2025-07-11T00:15:32.391166667Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:15:32.391217 containerd[1445]: time="2025-07-11T00:15:32.391180813Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:15:32.391217 containerd[1445]: time="2025-07-11T00:15:32.391194129Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:15:32.391217 containerd[1445]: time="2025-07-11T00:15:32.391206491Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:15:32.391267 containerd[1445]: time="2025-07-11T00:15:32.391220430Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:15:32.391267 containerd[1445]: time="2025-07-11T00:15:32.391235032Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:15:32.391267 containerd[1445]: time="2025-07-11T00:15:32.391247311Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:15:32.391267 containerd[1445]: time="2025-07-11T00:15:32.391259507Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:15:32.391336 containerd[1445]: time="2025-07-11T00:15:32.391271205Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:15:32.391336 containerd[1445]: time="2025-07-11T00:15:32.391291159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391336 containerd[1445]: time="2025-07-11T00:15:32.391307171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391336 containerd[1445]: time="2025-07-11T00:15:32.391319990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391408 containerd[1445]: time="2025-07-11T00:15:32.391338906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391408 containerd[1445]: time="2025-07-11T00:15:32.391352098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391408 containerd[1445]: time="2025-07-11T00:15:32.391364626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 11 00:15:32.391408 containerd[1445]: time="2025-07-11T00:15:32.391376034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391408 containerd[1445]: time="2025-07-11T00:15:32.391388520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391408 containerd[1445]: time="2025-07-11T00:15:32.391401961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391510 containerd[1445]: time="2025-07-11T00:15:32.391416189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391510 containerd[1445]: time="2025-07-11T00:15:32.391427929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391510 containerd[1445]: time="2025-07-11T00:15:32.391439503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391510 containerd[1445]: time="2025-07-11T00:15:32.391451575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391510 containerd[1445]: time="2025-07-11T00:15:32.391468293Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 11 00:15:32.391510 containerd[1445]: time="2025-07-11T00:15:32.391489408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391615 containerd[1445]: time="2025-07-11T00:15:32.391586064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.391615 containerd[1445]: time="2025-07-11T00:15:32.391598218Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:15:32.392296 containerd[1445]: time="2025-07-11T00:15:32.391725821Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:15:32.392296 containerd[1445]: time="2025-07-11T00:15:32.391749425Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:15:32.392296 containerd[1445]: time="2025-07-11T00:15:32.391761165Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:15:32.392296 containerd[1445]: time="2025-07-11T00:15:32.391773527Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:15:32.392296 containerd[1445]: time="2025-07-11T00:15:32.391945060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:15:32.392296 containerd[1445]: time="2025-07-11T00:15:32.391958459Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:15:32.392296 containerd[1445]: time="2025-07-11T00:15:32.391968498Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:15:32.392296 containerd[1445]: time="2025-07-11T00:15:32.391983764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 11 00:15:32.392515 containerd[1445]: time="2025-07-11T00:15:32.392303061Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:15:32.392515 containerd[1445]: time="2025-07-11T00:15:32.392362922Z" level=info msg="Connect containerd service" Jul 11 00:15:32.392515 containerd[1445]: time="2025-07-11T00:15:32.392387148Z" level=info msg="using legacy CRI server" Jul 11 00:15:32.392515 containerd[1445]: time="2025-07-11T00:15:32.392394076Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:15:32.392515 containerd[1445]: time="2025-07-11T00:15:32.392476959Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:15:32.393260 containerd[1445]: time="2025-07-11T00:15:32.393228927Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:15:32.393465 
containerd[1445]: time="2025-07-11T00:15:32.393434726Z" level=info msg="Start subscribing containerd event" Jul 11 00:15:32.393706 containerd[1445]: time="2025-07-11T00:15:32.393686986Z" level=info msg="Start recovering state" Jul 11 00:15:32.394087 containerd[1445]: time="2025-07-11T00:15:32.394067347Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:15:32.394138 containerd[1445]: time="2025-07-11T00:15:32.394116422Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:15:32.396005 containerd[1445]: time="2025-07-11T00:15:32.395973671Z" level=info msg="Start event monitor" Jul 11 00:15:32.396005 containerd[1445]: time="2025-07-11T00:15:32.396010965Z" level=info msg="Start snapshots syncer" Jul 11 00:15:32.396090 containerd[1445]: time="2025-07-11T00:15:32.396020672Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:15:32.396090 containerd[1445]: time="2025-07-11T00:15:32.396028802Z" level=info msg="Start streaming server" Jul 11 00:15:32.396253 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:15:32.397583 containerd[1445]: time="2025-07-11T00:15:32.397453049Z" level=info msg="containerd successfully booted in 0.040626s" Jul 11 00:15:32.567980 tar[1435]: linux-arm64/README.md Jul 11 00:15:32.581369 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:15:33.775080 systemd-networkd[1376]: eth0: Gained IPv6LL Jul 11 00:15:33.780611 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:15:33.782230 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:15:33.788122 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:15:33.790341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:33.792281 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:15:33.816767 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:15:33.818943 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:15:33.820395 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:15:33.822212 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:15:33.971454 sshd_keygen[1433]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:15:33.990625 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:15:34.001135 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:15:34.006050 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:15:34.006218 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:15:34.009103 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:15:34.020549 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:15:34.024087 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:15:34.026471 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 11 00:15:34.027840 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:15:34.367642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:15:34.369009 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 11 00:15:34.371777 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:15:34.374966 systemd[1]: Startup finished in 566ms (kernel) + 4.764s (initrd) + 3.941s (userspace) = 9.272s. Jul 11 00:15:34.811012 kubelet[1524]: E0711 00:15:34.810953 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:15:34.813569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:15:34.813709 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:15:38.586574 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:15:38.587755 systemd[1]: Started sshd@0-10.0.0.70:22-10.0.0.1:55952.service - OpenSSH per-connection server daemon (10.0.0.1:55952). Jul 11 00:15:38.645799 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 55952 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:15:38.649465 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:38.660059 systemd-logind[1421]: New session 1 of user core. Jul 11 00:15:38.661499 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:15:38.681555 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:15:38.690918 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:15:38.696522 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:15:38.702145 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:15:38.776136 systemd[1541]: Queued start job for default target default.target. Jul 11 00:15:38.785681 systemd[1541]: Created slice app.slice - User Application Slice. Jul 11 00:15:38.785709 systemd[1541]: Reached target paths.target - Paths. Jul 11 00:15:38.785721 systemd[1541]: Reached target timers.target - Timers. Jul 11 00:15:38.786786 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:15:38.795199 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:15:38.795244 systemd[1541]: Reached target sockets.target - Sockets. Jul 11 00:15:38.795256 systemd[1541]: Reached target basic.target - Basic System. Jul 11 00:15:38.795287 systemd[1541]: Reached target default.target - Main User Target. Jul 11 00:15:38.795309 systemd[1541]: Startup finished in 88ms. Jul 11 00:15:38.795457 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:15:38.797169 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:15:38.857433 systemd[1]: Started sshd@1-10.0.0.70:22-10.0.0.1:55962.service - OpenSSH per-connection server daemon (10.0.0.1:55962). Jul 11 00:15:38.921688 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 55962 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:15:38.923124 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:38.927630 systemd-logind[1421]: New session 2 of user core. Jul 11 00:15:38.941123 systemd[1]: Started session-2.scope - Session 2 of User core. 
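The kubelet failure above is the expected first-boot state: /var/lib/kubelet/config.yaml is only written by kubeadm init or kubeadm join, so until one of them runs, the unit exits with status 1 and systemd keeps scheduling restarts (the restart counter appears further down). A purely illustrative Go sketch of that pre-flight condition; the path comes from the log, everything else is hypothetical and not kubelet's actual code:

```go
// Illustration only: reproduces the condition kubelet reported above.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const configPath = "/var/lib/kubelet/config.yaml" // path from the log

	if _, err := os.Stat(configPath); errors.Is(err, fs.ErrNotExist) {
		// kubeadm init/join is expected to write this file later; until
		// then the unit exits 1 and systemd schedules a restart.
		fmt.Printf("%s missing: kubelet cannot start yet\n", configPath)
		os.Exit(1)
	}
	fmt.Println("kubelet config present")
}
```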
Jul 11 00:15:38.993494 sshd[1552]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:39.003215 systemd[1]: sshd@1-10.0.0.70:22-10.0.0.1:55962.service: Deactivated successfully. Jul 11 00:15:39.004523 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:15:39.006952 systemd-logind[1421]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:15:39.007987 systemd[1]: Started sshd@2-10.0.0.70:22-10.0.0.1:55964.service - OpenSSH per-connection server daemon (10.0.0.1:55964). Jul 11 00:15:39.008971 systemd-logind[1421]: Removed session 2. Jul 11 00:15:39.038727 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 55964 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:15:39.039802 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:39.042919 systemd-logind[1421]: New session 3 of user core. Jul 11 00:15:39.055027 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:15:39.103208 sshd[1559]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:39.113105 systemd[1]: sshd@2-10.0.0.70:22-10.0.0.1:55964.service: Deactivated successfully. Jul 11 00:15:39.114316 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:15:39.115883 systemd-logind[1421]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:15:39.122211 systemd[1]: Started sshd@3-10.0.0.70:22-10.0.0.1:55976.service - OpenSSH per-connection server daemon (10.0.0.1:55976). Jul 11 00:15:39.122946 systemd-logind[1421]: Removed session 3. Jul 11 00:15:39.149245 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 55976 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:15:39.150312 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:39.153956 systemd-logind[1421]: New session 4 of user core. Jul 11 00:15:39.161028 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:15:39.211839 sshd[1566]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:39.225090 systemd[1]: sshd@3-10.0.0.70:22-10.0.0.1:55976.service: Deactivated successfully. Jul 11 00:15:39.227966 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:15:39.229175 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:15:39.230147 systemd[1]: Started sshd@4-10.0.0.70:22-10.0.0.1:55986.service - OpenSSH per-connection server daemon (10.0.0.1:55986). Jul 11 00:15:39.231096 systemd-logind[1421]: Removed session 4. Jul 11 00:15:39.260587 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 55986 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:15:39.261615 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:39.265321 systemd-logind[1421]: New session 5 of user core. Jul 11 00:15:39.275040 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:15:39.340770 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:15:39.342946 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:15:39.357640 sudo[1576]: pam_unix(sudo:session): session closed for user root Jul 11 00:15:39.359136 sshd[1573]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:39.372252 systemd[1]: sshd@4-10.0.0.70:22-10.0.0.1:55986.service: Deactivated successfully. 
Jul 11 00:15:39.373586 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:15:39.375681 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:15:39.385099 systemd[1]: Started sshd@5-10.0.0.70:22-10.0.0.1:55990.service - OpenSSH per-connection server daemon (10.0.0.1:55990). Jul 11 00:15:39.385865 systemd-logind[1421]: Removed session 5. Jul 11 00:15:39.412709 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 55990 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:15:39.413853 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:39.417384 systemd-logind[1421]: New session 6 of user core. Jul 11 00:15:39.428019 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:15:39.477928 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:15:39.478415 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:15:39.481117 sudo[1585]: pam_unix(sudo:session): session closed for user root Jul 11 00:15:39.485221 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:15:39.485487 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:15:39.498091 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:15:39.499227 auditctl[1588]: No rules Jul 11 00:15:39.500022 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:15:39.500966 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:15:39.502488 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:15:39.524638 augenrules[1606]: No rules Jul 11 00:15:39.525714 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:15:39.526981 sudo[1584]: pam_unix(sudo:session): session closed for user root Jul 11 00:15:39.528255 sshd[1581]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:39.540229 systemd[1]: sshd@5-10.0.0.70:22-10.0.0.1:55990.service: Deactivated successfully. Jul 11 00:15:39.541743 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:15:39.542956 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:15:39.555219 systemd[1]: Started sshd@6-10.0.0.70:22-10.0.0.1:56004.service - OpenSSH per-connection server daemon (10.0.0.1:56004). Jul 11 00:15:39.556145 systemd-logind[1421]: Removed session 6. Jul 11 00:15:39.582987 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 56004 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:15:39.584106 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:39.587728 systemd-logind[1421]: New session 7 of user core. Jul 11 00:15:39.593010 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:15:39.643113 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:15:39.643365 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:15:39.945176 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 11 00:15:39.945313 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:15:40.198540 dockerd[1635]: time="2025-07-11T00:15:40.198139466Z" level=info msg="Starting up" Jul 11 00:15:40.349129 dockerd[1635]: time="2025-07-11T00:15:40.349086841Z" level=info msg="Loading containers: start." Jul 11 00:15:40.439935 kernel: Initializing XFRM netlink socket Jul 11 00:15:40.506351 systemd-networkd[1376]: docker0: Link UP Jul 11 00:15:40.522010 dockerd[1635]: time="2025-07-11T00:15:40.521970492Z" level=info msg="Loading containers: done." Jul 11 00:15:40.534090 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3793569740-merged.mount: Deactivated successfully. Jul 11 00:15:40.536592 dockerd[1635]: time="2025-07-11T00:15:40.536220704Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:15:40.536592 dockerd[1635]: time="2025-07-11T00:15:40.536307030Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:15:40.536592 dockerd[1635]: time="2025-07-11T00:15:40.536411302Z" level=info msg="Daemon has completed initialization" Jul 11 00:15:40.561171 dockerd[1635]: time="2025-07-11T00:15:40.561056419Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:15:40.562134 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:15:41.082231 containerd[1445]: time="2025-07-11T00:15:41.082196404Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 11 00:15:41.664323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854111450.mount: Deactivated successfully. 
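The PullImage/ImageCreate records around this point are the CRI plugin fetching the control-plane images into the overlayfs snapshotter. A hedged sketch of the same kind of pull through the containerd Go client (assuming the github.com/containerd/containerd module; the CRI plugin keeps its resources in the k8s.io namespace):

```go
// Sketch: pull one of the images the log shows being fetched, unpacking
// it into the configured snapshotter.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// The CRI plugin pulls into the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	fmt.Println("pulled", img.Name())
}
```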
Jul 11 00:15:42.643465 containerd[1445]: time="2025-07-11T00:15:42.643419147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:42.643929 containerd[1445]: time="2025-07-11T00:15:42.643899468Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 11 00:15:42.644770 containerd[1445]: time="2025-07-11T00:15:42.644737282Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:42.647748 containerd[1445]: time="2025-07-11T00:15:42.647719090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:42.649049 containerd[1445]: time="2025-07-11T00:15:42.648998976Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.566760482s" Jul 11 00:15:42.649049 containerd[1445]: time="2025-07-11T00:15:42.649035731Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 11 00:15:42.652236 containerd[1445]: time="2025-07-11T00:15:42.652144970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 11 00:15:43.748692 containerd[1445]: time="2025-07-11T00:15:43.748546666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:43.749414 containerd[1445]: time="2025-07-11T00:15:43.749238965Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 11 00:15:43.750047 containerd[1445]: time="2025-07-11T00:15:43.749989316Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:43.752836 containerd[1445]: time="2025-07-11T00:15:43.752802607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:43.755657 containerd[1445]: time="2025-07-11T00:15:43.755618360Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.103440565s" Jul 11 00:15:43.755726 containerd[1445]: time="2025-07-11T00:15:43.755661404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 11 00:15:43.756171 
containerd[1445]: time="2025-07-11T00:15:43.756150826Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 11 00:15:44.815957 containerd[1445]: time="2025-07-11T00:15:44.815896339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:44.816454 containerd[1445]: time="2025-07-11T00:15:44.816405514Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 11 00:15:44.817277 containerd[1445]: time="2025-07-11T00:15:44.817246429Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:44.820280 containerd[1445]: time="2025-07-11T00:15:44.820251709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:44.822391 containerd[1445]: time="2025-07-11T00:15:44.822351557Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.066174116s" Jul 11 00:15:44.822436 containerd[1445]: time="2025-07-11T00:15:44.822390526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 11 00:15:44.822846 containerd[1445]: time="2025-07-11T00:15:44.822822811Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 11 00:15:45.003818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:15:45.014112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:45.116336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:15:45.119875 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:15:45.165965 kubelet[1855]: E0711 00:15:45.165906 1855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:15:45.169010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:15:45.169150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:15:45.789712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101334129.mount: Deactivated successfully. 
Jul 11 00:15:46.161322 containerd[1445]: time="2025-07-11T00:15:46.161186901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:46.162011 containerd[1445]: time="2025-07-11T00:15:46.161953380Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 11 00:15:46.162783 containerd[1445]: time="2025-07-11T00:15:46.162748299Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:46.165486 containerd[1445]: time="2025-07-11T00:15:46.165449177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:46.166316 containerd[1445]: time="2025-07-11T00:15:46.166279578Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.343417585s" Jul 11 00:15:46.166356 containerd[1445]: time="2025-07-11T00:15:46.166312566Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 11 00:15:46.166896 containerd[1445]: time="2025-07-11T00:15:46.166808385Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 11 00:15:46.730050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729529112.mount: Deactivated successfully. 
Jul 11 00:15:47.454585 containerd[1445]: time="2025-07-11T00:15:47.454535545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:47.455566 containerd[1445]: time="2025-07-11T00:15:47.455304614Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 11 00:15:47.456236 containerd[1445]: time="2025-07-11T00:15:47.456203610Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:47.459545 containerd[1445]: time="2025-07-11T00:15:47.459508625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:47.460889 containerd[1445]: time="2025-07-11T00:15:47.460798688Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.293961345s" Jul 11 00:15:47.460889 containerd[1445]: time="2025-07-11T00:15:47.460831130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 11 00:15:47.461350 containerd[1445]: time="2025-07-11T00:15:47.461301914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:15:47.932279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569013379.mount: Deactivated successfully. 
Jul 11 00:15:47.936326 containerd[1445]: time="2025-07-11T00:15:47.936280831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:47.937314 containerd[1445]: time="2025-07-11T00:15:47.937279081Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 11 00:15:47.938220 containerd[1445]: time="2025-07-11T00:15:47.938173695Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:47.940311 containerd[1445]: time="2025-07-11T00:15:47.940278897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:47.941368 containerd[1445]: time="2025-07-11T00:15:47.941297287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 479.958309ms" Jul 11 00:15:47.941368 containerd[1445]: time="2025-07-11T00:15:47.941332663Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 11 00:15:47.942100 containerd[1445]: time="2025-07-11T00:15:47.942020930Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 11 00:15:48.379082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970960510.mount: Deactivated successfully. Jul 11 00:15:50.110404 containerd[1445]: time="2025-07-11T00:15:50.110351299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:50.111525 containerd[1445]: time="2025-07-11T00:15:50.111107786Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 11 00:15:50.112352 containerd[1445]: time="2025-07-11T00:15:50.112314458Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:50.115639 containerd[1445]: time="2025-07-11T00:15:50.115604088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:50.117015 containerd[1445]: time="2025-07-11T00:15:50.116948178Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.174897353s" Jul 11 00:15:50.117015 containerd[1445]: time="2025-07-11T00:15:50.116979644Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 11 00:15:53.964313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 00:15:53.975092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:53.994584 systemd[1]: Reloading requested from client PID 2015 ('systemctl') (unit session-7.scope)... Jul 11 00:15:53.994600 systemd[1]: Reloading... Jul 11 00:15:54.056918 zram_generator::config[2055]: No configuration found. Jul 11 00:15:54.172769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:15:54.226153 systemd[1]: Reloading finished in 231 ms. Jul 11 00:15:54.261979 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:54.264170 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:15:54.264358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:15:54.265793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:54.371706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:15:54.375605 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:15:54.410631 kubelet[2101]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:15:54.410631 kubelet[2101]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:15:54.410631 kubelet[2101]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
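The deprecation warnings above all point the same way: these flags belong in the kubelet config file, i.e. the /var/lib/kubelet/config.yaml referenced earlier. A sketch using a hypothetical struct that mirrors just the relevant fields (the real type is KubeletConfiguration, apiVersion kubelet.config.k8s.io/v1beta1; --pod-infra-container-image has no config-file equivalent, since per the warning the sandbox image now comes from the CRI side):

```go
// Hypothetical mirror of two deprecated flags as config-file fields;
// not the real KubeletConfiguration type from k8s.io/kubelet.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type kubeletConfigSubset struct {
	Kind                     string `json:"kind"`
	APIVersion               string `json:"apiVersion"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint,omitempty"`
	VolumePluginDir          string `json:"volumePluginDir,omitempty"`
}

func main() {
	cfg := kubeletConfigSubset{
		Kind:       "KubeletConfiguration",
		APIVersion: "kubelet.config.k8s.io/v1beta1",
		// Assumed endpoint; matches the socket containerd serves above.
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// Flexvolume directory as reported later in this log.
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out)) // config.yaml accepts this: YAML is a JSON superset
}
```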
Jul 11 00:15:54.410943 kubelet[2101]: I0711 00:15:54.410677 2101 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:15:54.890825 kubelet[2101]: I0711 00:15:54.890714 2101 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 11 00:15:54.890825 kubelet[2101]: I0711 00:15:54.890750 2101 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:15:54.891041 kubelet[2101]: I0711 00:15:54.891021 2101 server.go:956] "Client rotation is on, will bootstrap in background" Jul 11 00:15:54.921556 kubelet[2101]: E0711 00:15:54.921505 2101 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 11 00:15:54.922158 kubelet[2101]: I0711 00:15:54.922092 2101 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:15:54.936790 kubelet[2101]: E0711 00:15:54.936740 2101 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:15:54.936790 kubelet[2101]: I0711 00:15:54.936784 2101 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:15:54.940815 kubelet[2101]: I0711 00:15:54.940782 2101 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:15:54.941141 kubelet[2101]: I0711 00:15:54.941106 2101 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:15:54.941285 kubelet[2101]: I0711 00:15:54.941134 2101 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:15:54.941360 kubelet[2101]: I0711 00:15:54.941344 2101 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:15:54.941360 kubelet[2101]: I0711 00:15:54.941353 2101 container_manager_linux.go:303] "Creating device plugin manager" Jul 11 00:15:54.941557 kubelet[2101]: I0711 00:15:54.941531 2101 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:15:54.944302 kubelet[2101]: I0711 00:15:54.944274 2101 kubelet.go:480] "Attempting to sync node with API server" Jul 11 00:15:54.944302 kubelet[2101]: I0711 00:15:54.944298 2101 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:15:54.944388 kubelet[2101]: I0711 00:15:54.944324 2101 kubelet.go:386] "Adding apiserver pod source" Jul 11 00:15:54.945373 kubelet[2101]: I0711 00:15:54.945347 2101 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:15:54.946386 kubelet[2101]: I0711 00:15:54.946366 2101 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:15:54.947160 kubelet[2101]: I0711 00:15:54.947140 2101 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 11 00:15:54.947229 kubelet[2101]: E0711 00:15:54.947159 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 11 
00:15:54.947762 kubelet[2101]: W0711 00:15:54.947315 2101 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:15:54.947974 kubelet[2101]: E0711 00:15:54.947949 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 11 00:15:54.949833 kubelet[2101]: I0711 00:15:54.949795 2101 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:15:54.949833 kubelet[2101]: I0711 00:15:54.949835 2101 server.go:1289] "Started kubelet" Jul 11 00:15:54.950683 kubelet[2101]: I0711 00:15:54.949911 2101 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:15:54.953128 kubelet[2101]: I0711 00:15:54.952573 2101 server.go:317] "Adding debug handlers to kubelet server" Jul 11 00:15:54.953128 kubelet[2101]: I0711 00:15:54.952971 2101 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:15:54.955395 kubelet[2101]: I0711 00:15:54.955344 2101 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:15:54.955505 kubelet[2101]: I0711 00:15:54.955478 2101 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:15:54.955557 kubelet[2101]: I0711 00:15:54.955540 2101 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:15:54.955621 kubelet[2101]: E0711 00:15:54.955591 2101 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:15:54.955621 kubelet[2101]: I0711 00:15:54.955616 2101 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:15:54.956035 kubelet[2101]: I0711 00:15:54.955936 2101 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:15:54.956035 kubelet[2101]: I0711 00:15:54.955994 2101 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:15:54.956432 kubelet[2101]: E0711 00:15:54.956301 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 11 00:15:54.956846 kubelet[2101]: I0711 00:15:54.956812 2101 factory.go:223] Registration of the systemd container factory successfully Jul 11 00:15:54.957043 kubelet[2101]: I0711 00:15:54.956925 2101 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:15:54.957228 kubelet[2101]: E0711 00:15:54.955389 2101 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a3581b9c1b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:15:54.949812657 +0000 UTC m=+0.570720237,LastTimestamp:2025-07-11 00:15:54.949812657 +0000 UTC m=+0.570720237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:15:54.957871 kubelet[2101]: E0711 00:15:54.957790 2101 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:15:54.957978 kubelet[2101]: I0711 00:15:54.957905 2101 factory.go:223] Registration of the containerd container factory successfully Jul 11 00:15:54.958201 kubelet[2101]: E0711 00:15:54.958018 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="200ms" Jul 11 00:15:54.968300 kubelet[2101]: I0711 00:15:54.968278 2101 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:15:54.968389 kubelet[2101]: I0711 00:15:54.968322 2101 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:15:54.968389 kubelet[2101]: I0711 00:15:54.968363 2101 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:15:54.971664 kubelet[2101]: I0711 00:15:54.971609 2101 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 11 00:15:54.972720 kubelet[2101]: I0711 00:15:54.972687 2101 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 11 00:15:54.972720 kubelet[2101]: I0711 00:15:54.972716 2101 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 11 00:15:54.972797 kubelet[2101]: I0711 00:15:54.972734 2101 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 11 00:15:54.972797 kubelet[2101]: I0711 00:15:54.972741 2101 kubelet.go:2436] "Starting kubelet main sync loop" Jul 11 00:15:54.972834 kubelet[2101]: E0711 00:15:54.972796 2101 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:15:54.973369 kubelet[2101]: E0711 00:15:54.973249 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 11 00:15:55.048954 kubelet[2101]: I0711 00:15:55.048923 2101 policy_none.go:49] "None policy: Start" Jul 11 00:15:55.049406 kubelet[2101]: I0711 00:15:55.049103 2101 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:15:55.049406 kubelet[2101]: I0711 00:15:55.049122 2101 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:15:55.054506 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
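The NodeConfig dump above lists the hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, ...), each with Operator "LessThan". An illustration of how such a threshold evaluates, simplified to raw byte counts and fractions; this is not kubelet's actual eviction code:

```go
// Illustration only: evaluate hard-eviction thresholds like those in the
// logged NodeConfig. A threshold is either an absolute quantity or a
// percentage of capacity, and breaches when available < limit.
package main

import "fmt"

type threshold struct {
	signal   string
	percent  float64 // fraction of capacity; 0 means unused
	quantity int64   // absolute bytes; 0 means unused
}

func breached(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if t.percent > 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit // "Operator":"LessThan" in the log
}

func main() {
	// Values taken from the logged config.
	ts := []threshold{
		{signal: "memory.available", quantity: 100 * 1024 * 1024},
		{signal: "nodefs.available", percent: 0.10},
		{signal: "imagefs.available", percent: 0.15},
	}
	// Hypothetical node: 2 GB available out of 40 GB capacity.
	for _, t := range ts {
		fmt.Println(t.signal, "breached:", breached(t, 2_000_000_000, 40_000_000_000))
	}
}
```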
Jul 11 00:15:55.055965 kubelet[2101]: E0711 00:15:55.055924 2101 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:15:55.067465 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 00:15:55.069857 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 00:15:55.073531 kubelet[2101]: E0711 00:15:55.073510 2101 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:15:55.080706 kubelet[2101]: E0711 00:15:55.080532 2101 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 11 00:15:55.081192 kubelet[2101]: I0711 00:15:55.080719 2101 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:15:55.081192 kubelet[2101]: I0711 00:15:55.080732 2101 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:15:55.081192 kubelet[2101]: I0711 00:15:55.080944 2101 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:15:55.081965 kubelet[2101]: E0711 00:15:55.081946 2101 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:15:55.082068 kubelet[2101]: E0711 00:15:55.082054 2101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:15:55.158594 kubelet[2101]: E0711 00:15:55.158510 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="400ms" Jul 11 00:15:55.182913 kubelet[2101]: I0711 00:15:55.182530 2101 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:15:55.183019 kubelet[2101]: E0711 00:15:55.182932 2101 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Jul 11 00:15:55.283143 systemd[1]: Created slice kubepods-burstable-podb30c5d92506be0ca592f2c2fd43d975f.slice - libcontainer container kubepods-burstable-podb30c5d92506be0ca592f2c2fd43d975f.slice. Jul 11 00:15:55.305299 kubelet[2101]: E0711 00:15:55.305251 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:15:55.308175 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 11 00:15:55.309638 kubelet[2101]: E0711 00:15:55.309591 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:15:55.356319 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 11 00:15:55.357283 kubelet[2101]: I0711 00:15:55.357260 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:15:55.357440 kubelet[2101]: I0711 00:15:55.357290 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:15:55.357440 kubelet[2101]: I0711 00:15:55.357312 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:15:55.357440 kubelet[2101]: I0711 00:15:55.357327 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b30c5d92506be0ca592f2c2fd43d975f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b30c5d92506be0ca592f2c2fd43d975f\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:15:55.357440 kubelet[2101]: I0711 00:15:55.357347 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:15:55.357440 kubelet[2101]: I0711 00:15:55.357363 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:15:55.357971 kubelet[2101]: I0711 00:15:55.357930 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:15:55.357971 kubelet[2101]: I0711 00:15:55.357961 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b30c5d92506be0ca592f2c2fd43d975f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b30c5d92506be0ca592f2c2fd43d975f\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:15:55.358077 kubelet[2101]: I0711 00:15:55.357980 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b30c5d92506be0ca592f2c2fd43d975f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b30c5d92506be0ca592f2c2fd43d975f\") " 
pod="kube-system/kube-apiserver-localhost" Jul 11 00:15:55.358427 kubelet[2101]: E0711 00:15:55.358399 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:15:55.384414 kubelet[2101]: I0711 00:15:55.384360 2101 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:15:55.384714 kubelet[2101]: E0711 00:15:55.384691 2101 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Jul 11 00:15:55.559033 kubelet[2101]: E0711 00:15:55.558991 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="800ms" Jul 11 00:15:55.606520 kubelet[2101]: E0711 00:15:55.606492 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:15:55.607154 containerd[1445]: time="2025-07-11T00:15:55.607096263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b30c5d92506be0ca592f2c2fd43d975f,Namespace:kube-system,Attempt:0,}" Jul 11 00:15:55.610403 kubelet[2101]: E0711 00:15:55.610371 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:15:55.610917 containerd[1445]: time="2025-07-11T00:15:55.610790724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 11 00:15:55.659591 kubelet[2101]: E0711 00:15:55.659548 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:15:55.659980 containerd[1445]: time="2025-07-11T00:15:55.659942888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 11 00:15:55.786111 kubelet[2101]: I0711 00:15:55.786081 2101 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:15:55.786398 kubelet[2101]: E0711 00:15:55.786374 2101 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Jul 11 00:15:55.883544 kubelet[2101]: E0711 00:15:55.883417 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 11 00:15:55.916384 kubelet[2101]: E0711 00:15:55.916333 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 11 00:15:56.012708 kubelet[2101]: E0711 00:15:56.012659 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 11 00:15:56.050687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3236605821.mount: Deactivated successfully. Jul 11 00:15:56.054811 containerd[1445]: time="2025-07-11T00:15:56.054764401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:15:56.055951 containerd[1445]: time="2025-07-11T00:15:56.055914889Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 11 00:15:56.058088 containerd[1445]: time="2025-07-11T00:15:56.058054984Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:15:56.059548 containerd[1445]: time="2025-07-11T00:15:56.059253865Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:15:56.060329 containerd[1445]: time="2025-07-11T00:15:56.060290502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:15:56.061768 containerd[1445]: time="2025-07-11T00:15:56.061714201Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:15:56.063078 containerd[1445]: time="2025-07-11T00:15:56.062935475Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:15:56.063822 containerd[1445]: time="2025-07-11T00:15:56.063794446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:15:56.066920 containerd[1445]: time="2025-07-11T00:15:56.066469023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 459.295268ms" Jul 11 00:15:56.068649 containerd[1445]: time="2025-07-11T00:15:56.068605432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 457.749838ms" Jul 11 00:15:56.069250 containerd[1445]: time="2025-07-11T00:15:56.069210461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 409.209395ms" Jul 11 00:15:56.206703 containerd[1445]: time="2025-07-11T00:15:56.206515755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:15:56.206703 containerd[1445]: time="2025-07-11T00:15:56.206568835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:15:56.206703 containerd[1445]: time="2025-07-11T00:15:56.206584458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:15:56.206849 containerd[1445]: time="2025-07-11T00:15:56.206665219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:15:56.208383 containerd[1445]: time="2025-07-11T00:15:56.208054787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:15:56.208666 containerd[1445]: time="2025-07-11T00:15:56.208292384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:15:56.208666 containerd[1445]: time="2025-07-11T00:15:56.208329560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:15:56.208666 containerd[1445]: time="2025-07-11T00:15:56.208422499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:15:56.208848 containerd[1445]: time="2025-07-11T00:15:56.208786646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:15:56.209373 containerd[1445]: time="2025-07-11T00:15:56.209190573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:15:56.209373 containerd[1445]: time="2025-07-11T00:15:56.209220818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:15:56.209499 containerd[1445]: time="2025-07-11T00:15:56.209432096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:15:56.233056 systemd[1]: Started cri-containerd-0da0a539a3b10558d0fc166242dfbd362db43e55b8f5263f2d70edf0fe558878.scope - libcontainer container 0da0a539a3b10558d0fc166242dfbd362db43e55b8f5263f2d70edf0fe558878. Jul 11 00:15:56.234358 systemd[1]: Started cri-containerd-3956f7936f29008b0b1869f1c62a105e478e0cd59dae5d3e61980645d63b02c5.scope - libcontainer container 3956f7936f29008b0b1869f1c62a105e478e0cd59dae5d3e61980645d63b02c5. Jul 11 00:15:56.235539 systemd[1]: Started cri-containerd-7094ab302e11824a21151df432df851213807242df44b09250f16250987c9943.scope - libcontainer container 7094ab302e11824a21151df432df851213807242df44b09250f16250987c9943. 
Jul 11 00:15:56.265723 containerd[1445]: time="2025-07-11T00:15:56.265568501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0da0a539a3b10558d0fc166242dfbd362db43e55b8f5263f2d70edf0fe558878\"" Jul 11 00:15:56.266064 containerd[1445]: time="2025-07-11T00:15:56.266039168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b30c5d92506be0ca592f2c2fd43d975f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3956f7936f29008b0b1869f1c62a105e478e0cd59dae5d3e61980645d63b02c5\"" Jul 11 00:15:56.268575 kubelet[2101]: E0711 00:15:56.268532 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:15:56.269103 kubelet[2101]: E0711 00:15:56.268755 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:15:56.269199 containerd[1445]: time="2025-07-11T00:15:56.269165264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"7094ab302e11824a21151df432df851213807242df44b09250f16250987c9943\"" Jul 11 00:15:56.269751 kubelet[2101]: E0711 00:15:56.269717 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:15:56.272667 containerd[1445]: time="2025-07-11T00:15:56.272635196Z" level=info msg="CreateContainer within sandbox \"0da0a539a3b10558d0fc166242dfbd362db43e55b8f5263f2d70edf0fe558878\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:15:56.274003 containerd[1445]: time="2025-07-11T00:15:56.273968159Z" level=info msg="CreateContainer within sandbox \"3956f7936f29008b0b1869f1c62a105e478e0cd59dae5d3e61980645d63b02c5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:15:56.275780 containerd[1445]: time="2025-07-11T00:15:56.275750156Z" level=info msg="CreateContainer within sandbox \"7094ab302e11824a21151df432df851213807242df44b09250f16250987c9943\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:15:56.287488 containerd[1445]: time="2025-07-11T00:15:56.287449169Z" level=info msg="CreateContainer within sandbox \"0da0a539a3b10558d0fc166242dfbd362db43e55b8f5263f2d70edf0fe558878\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c3ba14c0747733db9e12f6c2ead851a6df2608f77d673f265192c1a356f20d17\"" Jul 11 00:15:56.288281 containerd[1445]: time="2025-07-11T00:15:56.288228380Z" level=info msg="StartContainer for \"c3ba14c0747733db9e12f6c2ead851a6df2608f77d673f265192c1a356f20d17\"" Jul 11 00:15:56.290837 containerd[1445]: time="2025-07-11T00:15:56.290799322Z" level=info msg="CreateContainer within sandbox \"3956f7936f29008b0b1869f1c62a105e478e0cd59dae5d3e61980645d63b02c5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"51e1c5e961256b046f28d777cc89fd45f324367e7389244a1175b2245d5e37f9\"" Jul 11 00:15:56.291584 containerd[1445]: time="2025-07-11T00:15:56.291390209Z" level=info msg="StartContainer for \"51e1c5e961256b046f28d777cc89fd45f324367e7389244a1175b2245d5e37f9\"" Jul 11 
00:15:56.293368 containerd[1445]: time="2025-07-11T00:15:56.293320229Z" level=info msg="CreateContainer within sandbox \"7094ab302e11824a21151df432df851213807242df44b09250f16250987c9943\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf31158e347546a4910e155e82d24344532f1b376ab2b8421bb4df1a55a7beef\"" Jul 11 00:15:56.294093 containerd[1445]: time="2025-07-11T00:15:56.293961952Z" level=info msg="StartContainer for \"cf31158e347546a4910e155e82d24344532f1b376ab2b8421bb4df1a55a7beef\"" Jul 11 00:15:56.313041 systemd[1]: Started cri-containerd-c3ba14c0747733db9e12f6c2ead851a6df2608f77d673f265192c1a356f20d17.scope - libcontainer container c3ba14c0747733db9e12f6c2ead851a6df2608f77d673f265192c1a356f20d17. Jul 11 00:15:56.316207 systemd[1]: Started cri-containerd-51e1c5e961256b046f28d777cc89fd45f324367e7389244a1175b2245d5e37f9.scope - libcontainer container 51e1c5e961256b046f28d777cc89fd45f324367e7389244a1175b2245d5e37f9. Jul 11 00:15:56.317416 systemd[1]: Started cri-containerd-cf31158e347546a4910e155e82d24344532f1b376ab2b8421bb4df1a55a7beef.scope - libcontainer container cf31158e347546a4910e155e82d24344532f1b376ab2b8421bb4df1a55a7beef. Jul 11 00:15:56.354167 containerd[1445]: time="2025-07-11T00:15:56.354024976Z" level=info msg="StartContainer for \"c3ba14c0747733db9e12f6c2ead851a6df2608f77d673f265192c1a356f20d17\" returns successfully" Jul 11 00:15:56.358547 containerd[1445]: time="2025-07-11T00:15:56.358511956Z" level=info msg="StartContainer for \"cf31158e347546a4910e155e82d24344532f1b376ab2b8421bb4df1a55a7beef\" returns successfully" Jul 11 00:15:56.358835 containerd[1445]: time="2025-07-11T00:15:56.358523454Z" level=info msg="StartContainer for \"51e1c5e961256b046f28d777cc89fd45f324367e7389244a1175b2245d5e37f9\" returns successfully" Jul 11 00:15:56.359443 kubelet[2101]: E0711 00:15:56.359403 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="1.6s" Jul 11 00:15:56.459444 kubelet[2101]: E0711 00:15:56.459324 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 11 00:15:56.588344 kubelet[2101]: I0711 00:15:56.588310 2101 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:15:56.979275 kubelet[2101]: E0711 00:15:56.979235 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:15:56.979419 kubelet[2101]: E0711 00:15:56.979361 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:15:56.980145 kubelet[2101]: E0711 00:15:56.980115 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:15:56.980478 kubelet[2101]: E0711 00:15:56.980454 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 
00:15:56.982550 kubelet[2101]: E0711 00:15:56.982526 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:15:56.982648 kubelet[2101]: E0711 00:15:56.982629 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:15:57.986810 kubelet[2101]: E0711 00:15:57.986771 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:15:57.987129 kubelet[2101]: E0711 00:15:57.986911 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:15:57.987177 kubelet[2101]: E0711 00:15:57.987133 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:15:57.987243 kubelet[2101]: E0711 00:15:57.987211 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:15:58.186160 kubelet[2101]: E0711 00:15:58.186124 2101 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:15:58.270971 kubelet[2101]: I0711 00:15:58.270933 2101 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:15:58.271083 kubelet[2101]: E0711 00:15:58.270977 2101 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:15:58.357836 kubelet[2101]: I0711 00:15:58.357785 2101 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:15:58.363383 kubelet[2101]: E0711 00:15:58.363091 2101 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:15:58.363383 kubelet[2101]: I0711 00:15:58.363119 2101 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:15:58.364705 kubelet[2101]: E0711 00:15:58.364668 2101 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 11 00:15:58.364705 kubelet[2101]: I0711 00:15:58.364696 2101 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:15:58.367252 kubelet[2101]: E0711 00:15:58.367221 2101 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 00:15:58.952249 kubelet[2101]: I0711 00:15:58.952192 2101 apiserver.go:52] "Watching apiserver" Jul 11 00:15:58.956794 kubelet[2101]: I0711 00:15:58.956753 2101 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:16:00.369747 systemd[1]: Reloading requested from client PID 
2388 ('systemctl') (unit session-7.scope)... Jul 11 00:16:00.369764 systemd[1]: Reloading... Jul 11 00:16:00.435907 zram_generator::config[2429]: No configuration found. Jul 11 00:16:00.514117 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:16:00.580294 systemd[1]: Reloading finished in 210 ms. Jul 11 00:16:00.611316 kubelet[2101]: I0711 00:16:00.611271 2101 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:16:00.611419 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:16:00.624980 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:16:00.625962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:16:00.636134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:16:00.733207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:16:00.737677 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:16:00.774728 kubelet[2469]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:16:00.774728 kubelet[2469]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:16:00.774728 kubelet[2469]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:16:00.775066 kubelet[2469]: I0711 00:16:00.774794 2469 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:16:00.783185 kubelet[2469]: I0711 00:16:00.783155 2469 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 11 00:16:00.783616 kubelet[2469]: I0711 00:16:00.783292 2469 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:16:00.783616 kubelet[2469]: I0711 00:16:00.783496 2469 server.go:956] "Client rotation is on, will bootstrap in background" Jul 11 00:16:00.785136 kubelet[2469]: I0711 00:16:00.785114 2469 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 11 00:16:00.787332 kubelet[2469]: I0711 00:16:00.787296 2469 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:16:00.792956 kubelet[2469]: E0711 00:16:00.792920 2469 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:16:00.792956 kubelet[2469]: I0711 00:16:00.792947 2469 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
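The three "Flag ... has been deprecated" warnings above all point at the same remedy: move the settings into the kubelet's config file. A sketch of the equivalent KubeletConfiguration stanza (field names are from kubelet.config.k8s.io/v1beta1; the endpoint value is an assumption, and the plugin dir matches the FlexVolume path probed later in this log):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

(--pod-infra-container-image has no config-file equivalent; per the warning it is simply being retired in 1.35, with the image garbage collector taking the sandbox image from the CRI instead.)
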
Jul 11 00:16:00.795211 kubelet[2469]: I0711 00:16:00.795188 2469 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 11 00:16:00.795401 kubelet[2469]: I0711 00:16:00.795371 2469 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:16:00.795535 kubelet[2469]: I0711 00:16:00.795393 2469 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:16:00.795616 kubelet[2469]: I0711 00:16:00.795539 2469 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:16:00.795616 kubelet[2469]: I0711 00:16:00.795547 2469 container_manager_linux.go:303] "Creating device plugin manager" Jul 11 00:16:00.795616 kubelet[2469]: I0711 00:16:00.795596 2469 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:16:00.795742 kubelet[2469]: I0711 00:16:00.795731 2469 kubelet.go:480] "Attempting to sync node with API server" Jul 11 00:16:00.795777 kubelet[2469]: I0711 00:16:00.795748 2469 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:16:00.795777 kubelet[2469]: I0711 00:16:00.795769 2469 kubelet.go:386] "Adding apiserver pod source" Jul 11 00:16:00.795830 kubelet[2469]: I0711 00:16:00.795778 2469 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:16:00.796974 kubelet[2469]: I0711 00:16:00.796934 2469 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:16:00.797693 kubelet[2469]: I0711 00:16:00.797663 2469 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 11 00:16:00.800422 kubelet[2469]: I0711 00:16:00.800394 2469 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:16:00.800555 kubelet[2469]: I0711 00:16:00.800543 2469 server.go:1289] "Started kubelet" Jul 11 00:16:00.800935 kubelet[2469]: 
I0711 00:16:00.800870 2469 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:16:00.801135 kubelet[2469]: I0711 00:16:00.801114 2469 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:16:00.801270 kubelet[2469]: I0711 00:16:00.801242 2469 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:16:00.801890 kubelet[2469]: I0711 00:16:00.801855 2469 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:16:00.804069 kubelet[2469]: I0711 00:16:00.803491 2469 server.go:317] "Adding debug handlers to kubelet server" Jul 11 00:16:00.804069 kubelet[2469]: I0711 00:16:00.804002 2469 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:16:00.805293 kubelet[2469]: E0711 00:16:00.805267 2469 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:00.805359 kubelet[2469]: I0711 00:16:00.805299 2469 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:16:00.806097 kubelet[2469]: I0711 00:16:00.805461 2469 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:16:00.806097 kubelet[2469]: I0711 00:16:00.805586 2469 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:16:00.809139 kubelet[2469]: I0711 00:16:00.809113 2469 factory.go:223] Registration of the systemd container factory successfully Jul 11 00:16:00.810039 kubelet[2469]: I0711 00:16:00.810012 2469 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:16:00.816016 kubelet[2469]: E0711 00:16:00.815987 2469 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:16:00.818920 kubelet[2469]: I0711 00:16:00.818898 2469 factory.go:223] Registration of the containerd container factory successfully Jul 11 00:16:00.835536 kubelet[2469]: I0711 00:16:00.835499 2469 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 11 00:16:00.837867 kubelet[2469]: I0711 00:16:00.837823 2469 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 11 00:16:00.837867 kubelet[2469]: I0711 00:16:00.837843 2469 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 11 00:16:00.837867 kubelet[2469]: I0711 00:16:00.837860 2469 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
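The nodeConfig dump a few entries back spells out the hard-eviction thresholds this kubelet will enforce. In KubeletConfiguration form those same values would read as follows (a sketch; the quantities are exactly the Percentage/Quantity values in the dump):

    evictionHard:
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
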
Jul 11 00:16:00.837867 kubelet[2469]: I0711 00:16:00.837869 2469 kubelet.go:2436] "Starting kubelet main sync loop" Jul 11 00:16:00.838045 kubelet[2469]: E0711 00:16:00.837938 2469 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:16:00.853204 kubelet[2469]: I0711 00:16:00.853179 2469 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:16:00.853204 kubelet[2469]: I0711 00:16:00.853200 2469 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:16:00.853336 kubelet[2469]: I0711 00:16:00.853220 2469 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:16:00.853360 kubelet[2469]: I0711 00:16:00.853348 2469 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:16:00.853382 kubelet[2469]: I0711 00:16:00.853359 2469 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:16:00.853382 kubelet[2469]: I0711 00:16:00.853375 2469 policy_none.go:49] "None policy: Start" Jul 11 00:16:00.853420 kubelet[2469]: I0711 00:16:00.853383 2469 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:16:00.853420 kubelet[2469]: I0711 00:16:00.853391 2469 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:16:00.853484 kubelet[2469]: I0711 00:16:00.853468 2469 state_mem.go:75] "Updated machine memory state" Jul 11 00:16:00.856640 kubelet[2469]: E0711 00:16:00.856608 2469 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 11 00:16:00.856900 kubelet[2469]: I0711 00:16:00.856766 2469 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:16:00.856900 kubelet[2469]: I0711 00:16:00.856828 2469 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:16:00.857048 kubelet[2469]: I0711 00:16:00.857021 2469 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:16:00.858229 kubelet[2469]: E0711 00:16:00.858209 2469 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 11 00:16:00.939098 kubelet[2469]: I0711 00:16:00.938999 2469 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:16:00.939157 kubelet[2469]: I0711 00:16:00.939107 2469 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:00.939306 kubelet[2469]: I0711 00:16:00.939002 2469 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:00.960729 kubelet[2469]: I0711 00:16:00.960698 2469 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:16:00.966393 kubelet[2469]: I0711 00:16:00.966273 2469 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 00:16:00.966393 kubelet[2469]: I0711 00:16:00.966353 2469 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:16:01.107073 kubelet[2469]: I0711 00:16:01.107031 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b30c5d92506be0ca592f2c2fd43d975f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b30c5d92506be0ca592f2c2fd43d975f\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:01.107073 kubelet[2469]: I0711 00:16:01.107067 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b30c5d92506be0ca592f2c2fd43d975f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b30c5d92506be0ca592f2c2fd43d975f\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:01.107261 kubelet[2469]: I0711 00:16:01.107087 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:01.107261 kubelet[2469]: I0711 00:16:01.107105 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:01.107261 kubelet[2469]: I0711 00:16:01.107127 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:01.107261 kubelet[2469]: I0711 00:16:01.107143 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b30c5d92506be0ca592f2c2fd43d975f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b30c5d92506be0ca592f2c2fd43d975f\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:01.107261 kubelet[2469]: I0711 00:16:01.107159 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:01.107378 kubelet[2469]: I0711 00:16:01.107183 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:01.107378 kubelet[2469]: I0711 00:16:01.107199 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:16:01.244171 kubelet[2469]: E0711 00:16:01.244048 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:01.245144 kubelet[2469]: E0711 00:16:01.245123 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:01.245254 kubelet[2469]: E0711 00:16:01.245236 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:01.796428 kubelet[2469]: I0711 00:16:01.796378 2469 apiserver.go:52] "Watching apiserver" Jul 11 00:16:01.806116 kubelet[2469]: I0711 00:16:01.806071 2469 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:16:01.846051 kubelet[2469]: E0711 00:16:01.845949 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:01.846051 kubelet[2469]: I0711 00:16:01.846049 2469 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:01.847080 kubelet[2469]: I0711 00:16:01.847022 2469 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:16:01.852979 kubelet[2469]: E0711 00:16:01.851798 2469 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:01.852979 kubelet[2469]: E0711 00:16:01.852021 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:01.853083 kubelet[2469]: E0711 00:16:01.852999 2469 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:16:01.853141 kubelet[2469]: E0711 00:16:01.853116 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:01.870204 kubelet[2469]: I0711 00:16:01.870125 2469 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.870112098 podStartE2EDuration="1.870112098s" podCreationTimestamp="2025-07-11 00:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:01.863648154 +0000 UTC m=+1.121933177" watchObservedRunningTime="2025-07-11 00:16:01.870112098 +0000 UTC m=+1.128397121" Jul 11 00:16:01.877104 kubelet[2469]: I0711 00:16:01.877065 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.877054531 podStartE2EDuration="1.877054531s" podCreationTimestamp="2025-07-11 00:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:01.870257891 +0000 UTC m=+1.128542914" watchObservedRunningTime="2025-07-11 00:16:01.877054531 +0000 UTC m=+1.135339554" Jul 11 00:16:01.877801 kubelet[2469]: I0711 00:16:01.877140 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.877134593 podStartE2EDuration="1.877134593s" podCreationTimestamp="2025-07-11 00:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:01.87698828 +0000 UTC m=+1.135273303" watchObservedRunningTime="2025-07-11 00:16:01.877134593 +0000 UTC m=+1.135419616" Jul 11 00:16:02.847661 kubelet[2469]: E0711 00:16:02.847630 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:02.848002 kubelet[2469]: E0711 00:16:02.847718 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:03.848648 kubelet[2469]: E0711 00:16:03.848610 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:06.476946 kubelet[2469]: I0711 00:16:06.476912 2469 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:16:06.477365 containerd[1445]: time="2025-07-11T00:16:06.477205574Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:16:06.477560 kubelet[2469]: I0711 00:16:06.477389 2469 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:16:07.622869 systemd[1]: Created slice kubepods-besteffort-pod3d1d09aa_86e7_42ec_b56a_35b900ccdbbc.slice - libcontainer container kubepods-besteffort-pod3d1d09aa_86e7_42ec_b56a_35b900ccdbbc.slice. 
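The "Updating runtime config through cri with podcidr" entry above is the kubelet pushing the node's pod CIDR down to the runtime over the same CRI socket; the "No cni config template is specified" line is containerd's response. A sketch of the call (assumed socket path as before; the CIDR is the one in the log):

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// The CIDR kuberuntime_manager just logged: CIDR="192.168.0.0/24"
    	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(ctx,
    		&runtimeapi.UpdateRuntimeConfigRequest{
    			RuntimeConfig: &runtimeapi.RuntimeConfig{
    				NetworkConfig: &runtimeapi.NetworkConfig{
    					PodCidr: "192.168.0.0/24",
    				},
    			},
    		})
    	if err != nil {
    		log.Fatal(err)
    	}
    }
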
Jul 11 00:16:07.647364 kubelet[2469]: I0711 00:16:07.647319 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d1d09aa-86e7-42ec-b56a-35b900ccdbbc-xtables-lock\") pod \"kube-proxy-d8s7w\" (UID: \"3d1d09aa-86e7-42ec-b56a-35b900ccdbbc\") " pod="kube-system/kube-proxy-d8s7w" Jul 11 00:16:07.647364 kubelet[2469]: I0711 00:16:07.647359 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spv9k\" (UniqueName: \"kubernetes.io/projected/3d1d09aa-86e7-42ec-b56a-35b900ccdbbc-kube-api-access-spv9k\") pod \"kube-proxy-d8s7w\" (UID: \"3d1d09aa-86e7-42ec-b56a-35b900ccdbbc\") " pod="kube-system/kube-proxy-d8s7w" Jul 11 00:16:07.647703 kubelet[2469]: I0711 00:16:07.647387 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d1d09aa-86e7-42ec-b56a-35b900ccdbbc-kube-proxy\") pod \"kube-proxy-d8s7w\" (UID: \"3d1d09aa-86e7-42ec-b56a-35b900ccdbbc\") " pod="kube-system/kube-proxy-d8s7w" Jul 11 00:16:07.647703 kubelet[2469]: I0711 00:16:07.647405 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d1d09aa-86e7-42ec-b56a-35b900ccdbbc-lib-modules\") pod \"kube-proxy-d8s7w\" (UID: \"3d1d09aa-86e7-42ec-b56a-35b900ccdbbc\") " pod="kube-system/kube-proxy-d8s7w" Jul 11 00:16:07.743861 systemd[1]: Created slice kubepods-besteffort-pod00ecc9b3_4e9c_4de8_8108_0e992cc773e0.slice - libcontainer container kubepods-besteffort-pod00ecc9b3_4e9c_4de8_8108_0e992cc773e0.slice. Jul 11 00:16:07.748485 kubelet[2469]: I0711 00:16:07.748438 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00ecc9b3-4e9c-4de8-8108-0e992cc773e0-var-lib-calico\") pod \"tigera-operator-747864d56d-7nrkr\" (UID: \"00ecc9b3-4e9c-4de8-8108-0e992cc773e0\") " pod="tigera-operator/tigera-operator-747864d56d-7nrkr" Jul 11 00:16:07.748586 kubelet[2469]: I0711 00:16:07.748502 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpckh\" (UniqueName: \"kubernetes.io/projected/00ecc9b3-4e9c-4de8-8108-0e992cc773e0-kube-api-access-qpckh\") pod \"tigera-operator-747864d56d-7nrkr\" (UID: \"00ecc9b3-4e9c-4de8-8108-0e992cc773e0\") " pod="tigera-operator/tigera-operator-747864d56d-7nrkr" Jul 11 00:16:07.938296 kubelet[2469]: E0711 00:16:07.937638 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:07.938797 containerd[1445]: time="2025-07-11T00:16:07.938735882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d8s7w,Uid:3d1d09aa-86e7-42ec-b56a-35b900ccdbbc,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:07.957925 containerd[1445]: time="2025-07-11T00:16:07.957685037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:07.957925 containerd[1445]: time="2025-07-11T00:16:07.957748635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:07.957925 containerd[1445]: time="2025-07-11T00:16:07.957760082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:07.957925 containerd[1445]: time="2025-07-11T00:16:07.957830005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:07.978125 systemd[1]: Started cri-containerd-18364befbc1337b004a68b055008a01ded735ed2dcb598dfe33b043edcdbd54d.scope - libcontainer container 18364befbc1337b004a68b055008a01ded735ed2dcb598dfe33b043edcdbd54d. Jul 11 00:16:07.996037 containerd[1445]: time="2025-07-11T00:16:07.995998119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d8s7w,Uid:3d1d09aa-86e7-42ec-b56a-35b900ccdbbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"18364befbc1337b004a68b055008a01ded735ed2dcb598dfe33b043edcdbd54d\"" Jul 11 00:16:07.996800 kubelet[2469]: E0711 00:16:07.996775 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:08.000520 containerd[1445]: time="2025-07-11T00:16:08.000448623Z" level=info msg="CreateContainer within sandbox \"18364befbc1337b004a68b055008a01ded735ed2dcb598dfe33b043edcdbd54d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:16:08.018267 containerd[1445]: time="2025-07-11T00:16:08.018222128Z" level=info msg="CreateContainer within sandbox \"18364befbc1337b004a68b055008a01ded735ed2dcb598dfe33b043edcdbd54d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b5c26b098775eed2a79f47eeb1a466b2e274cd8d839845e7111d7ba6fe972552\"" Jul 11 00:16:08.018801 containerd[1445]: time="2025-07-11T00:16:08.018765400Z" level=info msg="StartContainer for \"b5c26b098775eed2a79f47eeb1a466b2e274cd8d839845e7111d7ba6fe972552\"" Jul 11 00:16:08.041047 systemd[1]: Started cri-containerd-b5c26b098775eed2a79f47eeb1a466b2e274cd8d839845e7111d7ba6fe972552.scope - libcontainer container b5c26b098775eed2a79f47eeb1a466b2e274cd8d839845e7111d7ba6fe972552. Jul 11 00:16:08.047718 containerd[1445]: time="2025-07-11T00:16:08.047676648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-7nrkr,Uid:00ecc9b3-4e9c-4de8-8108-0e992cc773e0,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:16:08.068478 containerd[1445]: time="2025-07-11T00:16:08.068134120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:08.068478 containerd[1445]: time="2025-07-11T00:16:08.068220329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:08.068478 containerd[1445]: time="2025-07-11T00:16:08.068232856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:08.068478 containerd[1445]: time="2025-07-11T00:16:08.068330793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:08.068478 containerd[1445]: time="2025-07-11T00:16:08.068372817Z" level=info msg="StartContainer for \"b5c26b098775eed2a79f47eeb1a466b2e274cd8d839845e7111d7ba6fe972552\" returns successfully" Jul 11 00:16:08.093028 systemd[1]: Started cri-containerd-f2f2b6ebb2253e4b9d9c3bf588a54a9a6fddf6f0937f049f31309be2b01ee862.scope - libcontainer container f2f2b6ebb2253e4b9d9c3bf588a54a9a6fddf6f0937f049f31309be2b01ee862. Jul 11 00:16:08.121707 containerd[1445]: time="2025-07-11T00:16:08.121578221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-7nrkr,Uid:00ecc9b3-4e9c-4de8-8108-0e992cc773e0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f2f2b6ebb2253e4b9d9c3bf588a54a9a6fddf6f0937f049f31309be2b01ee862\"" Jul 11 00:16:08.124232 containerd[1445]: time="2025-07-11T00:16:08.124183317Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:16:08.511492 kubelet[2469]: E0711 00:16:08.511390 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:08.856894 kubelet[2469]: E0711 00:16:08.856635 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:08.856894 kubelet[2469]: E0711 00:16:08.856633 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:08.874592 kubelet[2469]: I0711 00:16:08.874495 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d8s7w" podStartSLOduration=1.874480607 podStartE2EDuration="1.874480607s" podCreationTimestamp="2025-07-11 00:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:08.874289417 +0000 UTC m=+8.132574440" watchObservedRunningTime="2025-07-11 00:16:08.874480607 +0000 UTC m=+8.132765630" Jul 11 00:16:09.218996 kubelet[2469]: E0711 00:16:09.218732 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:09.472287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299278099.mount: Deactivated successfully. 
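The PullImage entry above kicks off the fetch of the tigera-operator image over CRI's ImageService; the matching "Pulled image ... in 2.202265482s" completion appears below. A sketch of the call (assumed socket path; the image ref is taken from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	resp, err := runtimeapi.NewImageServiceClient(conn).PullImage(ctx,
    		&runtimeapi.PullImageRequest{
    			Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.3"},
    		})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// containerd later logs the resolved digest:
    	// quay.io/tigera/operator@sha256:dbf1bad0...
    	fmt.Println("image ref:", resp.ImageRef)
    }
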
Jul 11 00:16:09.858290 kubelet[2469]: E0711 00:16:09.858256 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:10.321061 containerd[1445]: time="2025-07-11T00:16:10.321005723Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:10.321665 containerd[1445]: time="2025-07-11T00:16:10.321635767Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 11 00:16:10.322334 containerd[1445]: time="2025-07-11T00:16:10.322299868Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:10.324977 containerd[1445]: time="2025-07-11T00:16:10.324941666Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:10.326530 containerd[1445]: time="2025-07-11T00:16:10.326490983Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.202265482s" Jul 11 00:16:10.326577 containerd[1445]: time="2025-07-11T00:16:10.326531443Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 11 00:16:10.333308 containerd[1445]: time="2025-07-11T00:16:10.333243854Z" level=info msg="CreateContainer within sandbox \"f2f2b6ebb2253e4b9d9c3bf588a54a9a6fddf6f0937f049f31309be2b01ee862\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:16:10.342327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193369696.mount: Deactivated successfully. Jul 11 00:16:10.347043 containerd[1445]: time="2025-07-11T00:16:10.347006489Z" level=info msg="CreateContainer within sandbox \"f2f2b6ebb2253e4b9d9c3bf588a54a9a6fddf6f0937f049f31309be2b01ee862\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b61a8b96f0f273167a119024939ab15bbce072552e0e2cb119d6283e97fbbdde\"" Jul 11 00:16:10.347965 containerd[1445]: time="2025-07-11T00:16:10.347364473Z" level=info msg="StartContainer for \"b61a8b96f0f273167a119024939ab15bbce072552e0e2cb119d6283e97fbbdde\"" Jul 11 00:16:10.372647 systemd[1]: Started cri-containerd-b61a8b96f0f273167a119024939ab15bbce072552e0e2cb119d6283e97fbbdde.scope - libcontainer container b61a8b96f0f273167a119024939ab15bbce072552e0e2cb119d6283e97fbbdde. 
Jul 11 00:16:10.430442 containerd[1445]: time="2025-07-11T00:16:10.428165972Z" level=info msg="StartContainer for \"b61a8b96f0f273167a119024939ab15bbce072552e0e2cb119d6283e97fbbdde\" returns successfully" Jul 11 00:16:12.854058 kubelet[2469]: E0711 00:16:12.854001 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:12.880568 kubelet[2469]: E0711 00:16:12.877966 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:12.896041 kubelet[2469]: I0711 00:16:12.895982 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-7nrkr" podStartSLOduration=3.690127596 podStartE2EDuration="5.895958418s" podCreationTimestamp="2025-07-11 00:16:07 +0000 UTC" firstStartedPulling="2025-07-11 00:16:08.123767718 +0000 UTC m=+7.382052741" lastFinishedPulling="2025-07-11 00:16:10.32959854 +0000 UTC m=+9.587883563" observedRunningTime="2025-07-11 00:16:10.873555739 +0000 UTC m=+10.131840762" watchObservedRunningTime="2025-07-11 00:16:12.895958418 +0000 UTC m=+12.154243441" Jul 11 00:16:15.636662 sudo[1617]: pam_unix(sudo:session): session closed for user root Jul 11 00:16:15.647570 sshd[1614]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:15.661753 systemd[1]: sshd@6-10.0.0.70:22-10.0.0.1:56004.service: Deactivated successfully. Jul 11 00:16:15.664531 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:16:15.664714 systemd[1]: session-7.scope: Consumed 6.028s CPU time, 155.1M memory peak, 0B memory swap peak. Jul 11 00:16:15.665511 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:16:15.667540 systemd-logind[1421]: Removed session 7. Jul 11 00:16:17.106251 update_engine[1426]: I20250711 00:16:17.103904 1426 update_attempter.cc:509] Updating boot flags... Jul 11 00:16:17.162922 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2886) Jul 11 00:16:21.264978 systemd[1]: Created slice kubepods-besteffort-podf83f56ae_eda9_4087_8ffc_21ef071e818f.slice - libcontainer container kubepods-besteffort-podf83f56ae_eda9_4087_8ffc_21ef071e818f.slice. 
Jul 11 00:16:21.340250 kubelet[2469]: I0711 00:16:21.340202 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f83f56ae-eda9-4087-8ffc-21ef071e818f-tigera-ca-bundle\") pod \"calico-typha-6fcb59ffd9-pcqp8\" (UID: \"f83f56ae-eda9-4087-8ffc-21ef071e818f\") " pod="calico-system/calico-typha-6fcb59ffd9-pcqp8" Jul 11 00:16:21.340250 kubelet[2469]: I0711 00:16:21.340246 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f83f56ae-eda9-4087-8ffc-21ef071e818f-typha-certs\") pod \"calico-typha-6fcb59ffd9-pcqp8\" (UID: \"f83f56ae-eda9-4087-8ffc-21ef071e818f\") " pod="calico-system/calico-typha-6fcb59ffd9-pcqp8" Jul 11 00:16:21.340250 kubelet[2469]: I0711 00:16:21.340266 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcfxh\" (UniqueName: \"kubernetes.io/projected/f83f56ae-eda9-4087-8ffc-21ef071e818f-kube-api-access-bcfxh\") pod \"calico-typha-6fcb59ffd9-pcqp8\" (UID: \"f83f56ae-eda9-4087-8ffc-21ef071e818f\") " pod="calico-system/calico-typha-6fcb59ffd9-pcqp8" Jul 11 00:16:21.553536 systemd[1]: Created slice kubepods-besteffort-pod4817c51c_c485_44f6_8a6f_2ecd96b5cb44.slice - libcontainer container kubepods-besteffort-pod4817c51c_c485_44f6_8a6f_2ecd96b5cb44.slice. Jul 11 00:16:21.571344 kubelet[2469]: E0711 00:16:21.571299 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:21.572756 containerd[1445]: time="2025-07-11T00:16:21.572714992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fcb59ffd9-pcqp8,Uid:f83f56ae-eda9-4087-8ffc-21ef071e818f,Namespace:calico-system,Attempt:0,}" Jul 11 00:16:21.592445 containerd[1445]: time="2025-07-11T00:16:21.592281091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:21.593004 containerd[1445]: time="2025-07-11T00:16:21.592814727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:21.593004 containerd[1445]: time="2025-07-11T00:16:21.592839654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:21.593004 containerd[1445]: time="2025-07-11T00:16:21.592934521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:21.612119 systemd[1]: Started cri-containerd-d527a6fccf96578519cfb7d087b2d1dccb87102ba84644dea7855f111126aa32.scope - libcontainer container d527a6fccf96578519cfb7d087b2d1dccb87102ba84644dea7855f111126aa32. 
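The dns.go:153 "Nameserver limits exceeded" error that recurs throughout this log (most recently above) means the host resolv.conf lists more nameservers than the kubelet will pass through; it keeps the first three, matching glibc's MAXNS, which is why the applied line is always exactly "1.1.1.1 1.0.0.1 8.8.8.8". A hypothetical resolv.conf of the shape that triggers it (the fourth address is an assumption; the log never shows the dropped entry):

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 192.168.0.1   # omitted by the kubelet: only the first three apply
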
Jul 11 00:16:21.641198 containerd[1445]: time="2025-07-11T00:16:21.641154768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fcb59ffd9-pcqp8,Uid:f83f56ae-eda9-4087-8ffc-21ef071e818f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d527a6fccf96578519cfb7d087b2d1dccb87102ba84644dea7855f111126aa32\""
Jul 11 00:16:21.641533 kubelet[2469]: I0711 00:16:21.641492 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-cni-bin-dir\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641533 kubelet[2469]: I0711 00:16:21.641532 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-cni-log-dir\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641622 kubelet[2469]: I0711 00:16:21.641550 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z42m\" (UniqueName: \"kubernetes.io/projected/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-kube-api-access-6z42m\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641622 kubelet[2469]: I0711 00:16:21.641568 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-flexvol-driver-host\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641622 kubelet[2469]: I0711 00:16:21.641582 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-node-certs\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641622 kubelet[2469]: I0711 00:16:21.641596 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-tigera-ca-bundle\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641622 kubelet[2469]: I0711 00:16:21.641610 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-var-lib-calico\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641732 kubelet[2469]: I0711 00:16:21.641622 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-xtables-lock\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641732 kubelet[2469]: I0711 00:16:21.641639 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-cni-net-dir\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641732 kubelet[2469]: I0711 00:16:21.641654 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-policysync\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641732 kubelet[2469]: I0711 00:16:21.641670 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-lib-modules\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.641732 kubelet[2469]: I0711 00:16:21.641683 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4817c51c-c485-44f6-8a6f-2ecd96b5cb44-var-run-calico\") pod \"calico-node-6qcgz\" (UID: \"4817c51c-c485-44f6-8a6f-2ecd96b5cb44\") " pod="calico-system/calico-node-6qcgz"
Jul 11 00:16:21.642171 kubelet[2469]: E0711 00:16:21.642139 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:16:21.643438 containerd[1445]: time="2025-07-11T00:16:21.643405103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 11 00:16:21.751507 kubelet[2469]: E0711 00:16:21.751400 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:16:21.751507 kubelet[2469]: W0711 00:16:21.751421 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:16:21.752158 kubelet[2469]: E0711 00:16:21.752134 2469 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:16:21.784095 kubelet[2469]: E0711 00:16:21.784054 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v292t" podUID="9fab7f82-393a-41e4-a999-9430044f6a22"
Jul 11 00:16:21.846990 kubelet[2469]: I0711 00:16:21.846965 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fab7f82-393a-41e4-a999-9430044f6a22-kubelet-dir\") pod \"csi-node-driver-v292t\" (UID: \"9fab7f82-393a-41e4-a999-9430044f6a22\") " pod="calico-system/csi-node-driver-v292t"
Jul 11 00:16:21.847225 kubelet[2469]: I0711 00:16:21.847203 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9fab7f82-393a-41e4-a999-9430044f6a22-socket-dir\") pod \"csi-node-driver-v292t\" (UID: \"9fab7f82-393a-41e4-a999-9430044f6a22\") " pod="calico-system/csi-node-driver-v292t"
Jul 11 00:16:21.847502 kubelet[2469]: I0711 00:16:21.847415 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9fab7f82-393a-41e4-a999-9430044f6a22-varrun\") pod \"csi-node-driver-v292t\" (UID: \"9fab7f82-393a-41e4-a999-9430044f6a22\") " pod="calico-system/csi-node-driver-v292t"
Jul 11 00:16:21.849225 kubelet[2469]: I0711 00:16:21.849192 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9fab7f82-393a-41e4-a999-9430044f6a22-registration-dir\") pod \"csi-node-driver-v292t\" (UID: \"9fab7f82-393a-41e4-a999-9430044f6a22\") " pod="calico-system/csi-node-driver-v292t"
Jul 11 00:16:21.850223 kubelet[2469]: I0711 00:16:21.850123 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjzm6\" (UniqueName: \"kubernetes.io/projected/9fab7f82-393a-41e4-a999-9430044f6a22-kube-api-access-xjzm6\") pod \"csi-node-driver-v292t\" (UID: \"9fab7f82-393a-41e4-a999-9430044f6a22\") " pod="calico-system/csi-node-driver-v292t"
Jul 11 00:16:21.856590 containerd[1445]: time="2025-07-11T00:16:21.856491775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6qcgz,Uid:4817c51c-c485-44f6-8a6f-2ecd96b5cb44,Namespace:calico-system,Attempt:0,}"
Jul 11 00:16:21.880820 containerd[1445]: time="2025-07-11T00:16:21.878806115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:16:21.880820 containerd[1445]: time="2025-07-11T00:16:21.878858090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:16:21.880820 containerd[1445]: time="2025-07-11T00:16:21.878916427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:16:21.880820 containerd[1445]: time="2025-07-11T00:16:21.879765314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:16:21.906652 systemd[1]: Started cri-containerd-76616ace514b5ecbd65aeca040fb910d70646b7928cbc9061229877a70598270.scope - libcontainer container 76616ace514b5ecbd65aeca040fb910d70646b7928cbc9061229877a70598270.
Jul 11 00:16:21.930078 containerd[1445]: time="2025-07-11T00:16:21.930035558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6qcgz,Uid:4817c51c-c485-44f6-8a6f-2ecd96b5cb44,Namespace:calico-system,Attempt:0,} returns sandbox id \"76616ace514b5ecbd65aeca040fb910d70646b7928cbc9061229877a70598270\""
Jul 11 00:16:21.952840 kubelet[2469]: E0711 00:16:21.952816 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:16:21.952840 kubelet[2469]: W0711 00:16:21.952836 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:16:21.953015 kubelet[2469]: E0711 00:16:21.952854 2469 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
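The sandbox and container messages in this stretch trace the standard CRI sequence kubelet drives against containerd: RunPodSandbox returns a sandbox id, then CreateContainer and StartContainer run the workload inside it (the typha container's create/start follows just below). A hedged Go sketch of that call order against the CRI v1 API; the socket path and the nil configs are placeholders, and error handling is minimal:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runPod mirrors the RunPodSandbox -> CreateContainer -> StartContainer
// order visible in the log; names and configs are illustrative only.
func runPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	podCfg *runtimeapi.PodSandboxConfig, ctrCfg *runtimeapi.ContainerConfig) (string, error) {
	// "RunPodSandbox ... returns sandbox id" corresponds to sb.PodSandboxId.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: podCfg})
	if err != nil {
		return "", err
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        ctrCfg,
		SandboxConfig: podCfg,
	})
	if err != nil {
		return "", err
	}
	// "StartContainer ... returns successfully" is the final step.
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return ctr.ContainerId, err
}

func main() {
	// Assumed containerd CRI endpoint; kubelet talks to the same socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	_, _ = runPod(context.Background(), runtimeapi.NewRuntimeServiceClient(conn), nil, nil)
}
```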
Jul 11 00:16:22.634970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969480841.mount: Deactivated successfully.
Jul 11 00:16:23.359118 containerd[1445]: time="2025-07-11T00:16:23.359071768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:16:23.359747 containerd[1445]: time="2025-07-11T00:16:23.359693373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 11 00:16:23.360624 containerd[1445]: time="2025-07-11T00:16:23.360586130Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:16:23.363213 containerd[1445]: time="2025-07-11T00:16:23.363180298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:16:23.364204 containerd[1445]: time="2025-07-11T00:16:23.364164839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.720722405s"
Jul 11 00:16:23.364204 containerd[1445]: time="2025-07-11T00:16:23.364200809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 11 00:16:23.365807 containerd[1445]: time="2025-07-11T00:16:23.365504394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 11 00:16:23.380563 containerd[1445]: time="2025-07-11T00:16:23.380519416Z" level=info msg="CreateContainer within sandbox \"d527a6fccf96578519cfb7d087b2d1dccb87102ba84644dea7855f111126aa32\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 11 00:16:23.391932 containerd[1445]: time="2025-07-11T00:16:23.391892072Z" level=info msg="CreateContainer within sandbox \"d527a6fccf96578519cfb7d087b2d1dccb87102ba84644dea7855f111126aa32\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fcaf5e20758dfe788665dfc49d2169225af28da01cc9c2e535ca48bb34af8b36\""
Jul 11 00:16:23.392670 containerd[1445]: time="2025-07-11T00:16:23.392405968Z" level=info msg="StartContainer for \"fcaf5e20758dfe788665dfc49d2169225af28da01cc9c2e535ca48bb34af8b36\""
Jul 11 00:16:23.422108 systemd[1]: Started cri-containerd-fcaf5e20758dfe788665dfc49d2169225af28da01cc9c2e535ca48bb34af8b36.scope - libcontainer container fcaf5e20758dfe788665dfc49d2169225af28da01cc9c2e535ca48bb34af8b36.
Jul 11 00:16:23.497667 containerd[1445]: time="2025-07-11T00:16:23.497518684Z" level=info msg="StartContainer for \"fcaf5e20758dfe788665dfc49d2169225af28da01cc9c2e535ca48bb34af8b36\" returns successfully"
Jul 11 00:16:23.839435 kubelet[2469]: E0711 00:16:23.839104 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v292t" podUID="9fab7f82-393a-41e4-a999-9430044f6a22"
Jul 11 00:16:23.914814 kubelet[2469]: E0711 00:16:23.914689 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:16:23.931919 kubelet[2469]: I0711 00:16:23.931474 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fcb59ffd9-pcqp8" podStartSLOduration=1.209016406 podStartE2EDuration="2.93145108s" podCreationTimestamp="2025-07-11 00:16:21 +0000 UTC" firstStartedPulling="2025-07-11 00:16:21.642853263 +0000 UTC m=+20.901138246" lastFinishedPulling="2025-07-11 00:16:23.365287897 +0000 UTC m=+22.623572920" observedRunningTime="2025-07-11 00:16:23.930936143 +0000 UTC m=+23.189221166" watchObservedRunningTime="2025-07-11 00:16:23.93145108 +0000 UTC m=+23.189736103"
Jul 11 00:16:23.964021 kubelet[2469]: E0711 00:16:23.962819 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:16:23.964021 kubelet[2469]: W0711 00:16:23.962984 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:16:23.964021 kubelet[2469]: E0711 00:16:23.963014 2469 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jul 11 00:16:24.465783 containerd[1445]: time="2025-07-11T00:16:24.465716032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:24.467233 containerd[1445]: time="2025-07-11T00:16:24.467186604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 11 00:16:24.469735 containerd[1445]: time="2025-07-11T00:16:24.468117160Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:24.470746 containerd[1445]: time="2025-07-11T00:16:24.470714018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:24.471625 containerd[1445]: time="2025-07-11T00:16:24.471346658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.105806014s" Jul 11 00:16:24.471625 containerd[1445]: time="2025-07-11T00:16:24.471377266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 11 00:16:24.479085 containerd[1445]: time="2025-07-11T00:16:24.479046329Z" level=info msg="CreateContainer within sandbox \"76616ace514b5ecbd65aeca040fb910d70646b7928cbc9061229877a70598270\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:16:24.505602 containerd[1445]: time="2025-07-11T00:16:24.505532679Z" level=info msg="CreateContainer within sandbox 
\"76616ace514b5ecbd65aeca040fb910d70646b7928cbc9061229877a70598270\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"94882be807afcd5f26719a546048e65e4e3c9478543430c6052ca342505236f5\"" Jul 11 00:16:24.506586 containerd[1445]: time="2025-07-11T00:16:24.506460274Z" level=info msg="StartContainer for \"94882be807afcd5f26719a546048e65e4e3c9478543430c6052ca342505236f5\"" Jul 11 00:16:24.542146 systemd[1]: Started cri-containerd-94882be807afcd5f26719a546048e65e4e3c9478543430c6052ca342505236f5.scope - libcontainer container 94882be807afcd5f26719a546048e65e4e3c9478543430c6052ca342505236f5. Jul 11 00:16:24.584589 containerd[1445]: time="2025-07-11T00:16:24.584509488Z" level=info msg="StartContainer for \"94882be807afcd5f26719a546048e65e4e3c9478543430c6052ca342505236f5\" returns successfully" Jul 11 00:16:24.597069 systemd[1]: cri-containerd-94882be807afcd5f26719a546048e65e4e3c9478543430c6052ca342505236f5.scope: Deactivated successfully. Jul 11 00:16:24.625365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94882be807afcd5f26719a546048e65e4e3c9478543430c6052ca342505236f5-rootfs.mount: Deactivated successfully. Jul 11 00:16:24.658086 containerd[1445]: time="2025-07-11T00:16:24.652301304Z" level=info msg="shim disconnected" id=94882be807afcd5f26719a546048e65e4e3c9478543430c6052ca342505236f5 namespace=k8s.io Jul 11 00:16:24.658086 containerd[1445]: time="2025-07-11T00:16:24.658083168Z" level=warning msg="cleaning up after shim disconnected" id=94882be807afcd5f26719a546048e65e4e3c9478543430c6052ca342505236f5 namespace=k8s.io Jul 11 00:16:24.658308 containerd[1445]: time="2025-07-11T00:16:24.658101573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:16:24.918582 kubelet[2469]: I0711 00:16:24.918450 2469 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:16:24.923851 kubelet[2469]: E0711 00:16:24.920782 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:24.923985 containerd[1445]: time="2025-07-11T00:16:24.921120250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:16:25.839187 kubelet[2469]: E0711 00:16:25.839130 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v292t" podUID="9fab7f82-393a-41e4-a999-9430044f6a22" Jul 11 00:16:27.820217 containerd[1445]: time="2025-07-11T00:16:27.819493397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:27.820217 containerd[1445]: time="2025-07-11T00:16:27.819990467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 11 00:16:27.820823 containerd[1445]: time="2025-07-11T00:16:27.820777082Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:27.823338 containerd[1445]: time="2025-07-11T00:16:27.823306724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 11 00:16:27.824177 containerd[1445]: time="2025-07-11T00:16:27.824139229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.902967606s" Jul 11 00:16:27.824221 containerd[1445]: time="2025-07-11T00:16:27.824172516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 11 00:16:27.828242 containerd[1445]: time="2025-07-11T00:16:27.828212093Z" level=info msg="CreateContainer within sandbox \"76616ace514b5ecbd65aeca040fb910d70646b7928cbc9061229877a70598270\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:16:27.844006 kubelet[2469]: E0711 00:16:27.838711 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v292t" podUID="9fab7f82-393a-41e4-a999-9430044f6a22" Jul 11 00:16:27.934535 containerd[1445]: time="2025-07-11T00:16:27.934403517Z" level=info msg="CreateContainer within sandbox \"76616ace514b5ecbd65aeca040fb910d70646b7928cbc9061229877a70598270\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"715f8be3006bb817c3fe626092b548e5a77f36e682fd7d6bb77cd2b5b3947676\"" Jul 11 00:16:27.935256 containerd[1445]: time="2025-07-11T00:16:27.935053421Z" level=info msg="StartContainer for \"715f8be3006bb817c3fe626092b548e5a77f36e682fd7d6bb77cd2b5b3947676\"" Jul 11 00:16:27.975090 systemd[1]: Started cri-containerd-715f8be3006bb817c3fe626092b548e5a77f36e682fd7d6bb77cd2b5b3947676.scope - libcontainer container 715f8be3006bb817c3fe626092b548e5a77f36e682fd7d6bb77cd2b5b3947676. Jul 11 00:16:28.034188 containerd[1445]: time="2025-07-11T00:16:28.034131521Z" level=info msg="StartContainer for \"715f8be3006bb817c3fe626092b548e5a77f36e682fd7d6bb77cd2b5b3947676\" returns successfully" Jul 11 00:16:29.497807 systemd[1]: cri-containerd-715f8be3006bb817c3fe626092b548e5a77f36e682fd7d6bb77cd2b5b3947676.scope: Deactivated successfully. Jul 11 00:16:29.514454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-715f8be3006bb817c3fe626092b548e5a77f36e682fd7d6bb77cd2b5b3947676-rootfs.mount: Deactivated successfully. Jul 11 00:16:29.525080 kubelet[2469]: I0711 00:16:29.524924 2469 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:16:29.759088 containerd[1445]: time="2025-07-11T00:16:29.759024560Z" level=info msg="shim disconnected" id=715f8be3006bb817c3fe626092b548e5a77f36e682fd7d6bb77cd2b5b3947676 namespace=k8s.io Jul 11 00:16:29.759088 containerd[1445]: time="2025-07-11T00:16:29.759086893Z" level=warning msg="cleaning up after shim disconnected" id=715f8be3006bb817c3fe626092b548e5a77f36e682fd7d6bb77cd2b5b3947676 namespace=k8s.io Jul 11 00:16:29.759088 containerd[1445]: time="2025-07-11T00:16:29.759095695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:16:29.859381 systemd[1]: Created slice kubepods-besteffort-podf6ab768e_fd1a_4783_bee2_e85ef4dae0dc.slice - libcontainer container kubepods-besteffort-podf6ab768e_fd1a_4783_bee2_e85ef4dae0dc.slice. 
Jul 11 00:16:29.878457 kubelet[2469]: I0711 00:16:29.878405 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24vjh\" (UniqueName: \"kubernetes.io/projected/f6ab768e-fd1a-4783-bee2-e85ef4dae0dc-kube-api-access-24vjh\") pod \"calico-apiserver-6f9544685f-czxjh\" (UID: \"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc\") " pod="calico-apiserver/calico-apiserver-6f9544685f-czxjh" Jul 11 00:16:29.878457 kubelet[2469]: I0711 00:16:29.878455 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6ab768e-fd1a-4783-bee2-e85ef4dae0dc-calico-apiserver-certs\") pod \"calico-apiserver-6f9544685f-czxjh\" (UID: \"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc\") " pod="calico-apiserver/calico-apiserver-6f9544685f-czxjh" Jul 11 00:16:29.888592 systemd[1]: Created slice kubepods-besteffort-pod9fab7f82_393a_41e4_a999_9430044f6a22.slice - libcontainer container kubepods-besteffort-pod9fab7f82_393a_41e4_a999_9430044f6a22.slice. Jul 11 00:16:29.891466 containerd[1445]: time="2025-07-11T00:16:29.891426538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v292t,Uid:9fab7f82-393a-41e4-a999-9430044f6a22,Namespace:calico-system,Attempt:0,}" Jul 11 00:16:29.892930 systemd[1]: Created slice kubepods-besteffort-podb8e35104_a845_48f9_8ad5_cc498d1edd3f.slice - libcontainer container kubepods-besteffort-podb8e35104_a845_48f9_8ad5_cc498d1edd3f.slice. Jul 11 00:16:29.943768 systemd[1]: Created slice kubepods-besteffort-pod58c7686e_2053_4b0c_9e02_052b8ed1eb7b.slice - libcontainer container kubepods-besteffort-pod58c7686e_2053_4b0c_9e02_052b8ed1eb7b.slice. Jul 11 00:16:29.954802 systemd[1]: Created slice kubepods-besteffort-poda38583e9_7b43_4c77_8995_9dc39bf3123b.slice - libcontainer container kubepods-besteffort-poda38583e9_7b43_4c77_8995_9dc39bf3123b.slice. Jul 11 00:16:29.968102 containerd[1445]: time="2025-07-11T00:16:29.968066641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:16:29.978267 systemd[1]: Created slice kubepods-besteffort-pode396f44e_1f83_4d6d_a81a_6662c419d2df.slice - libcontainer container kubepods-besteffort-pode396f44e_1f83_4d6d_a81a_6662c419d2df.slice. 
Jul 11 00:16:29.983951 kubelet[2469]: I0711 00:16:29.983438 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96qfr\" (UniqueName: \"kubernetes.io/projected/58c7686e-2053-4b0c-9e02-052b8ed1eb7b-kube-api-access-96qfr\") pod \"goldmane-768f4c5c69-lf8r5\" (UID: \"58c7686e-2053-4b0c-9e02-052b8ed1eb7b\") " pod="calico-system/goldmane-768f4c5c69-lf8r5" Jul 11 00:16:29.987442 kubelet[2469]: I0711 00:16:29.985555 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58c7686e-2053-4b0c-9e02-052b8ed1eb7b-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-lf8r5\" (UID: \"58c7686e-2053-4b0c-9e02-052b8ed1eb7b\") " pod="calico-system/goldmane-768f4c5c69-lf8r5" Jul 11 00:16:29.987442 kubelet[2469]: I0711 00:16:29.985584 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/58c7686e-2053-4b0c-9e02-052b8ed1eb7b-goldmane-key-pair\") pod \"goldmane-768f4c5c69-lf8r5\" (UID: \"58c7686e-2053-4b0c-9e02-052b8ed1eb7b\") " pod="calico-system/goldmane-768f4c5c69-lf8r5" Jul 11 00:16:29.987442 kubelet[2469]: I0711 00:16:29.985604 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58c7686e-2053-4b0c-9e02-052b8ed1eb7b-config\") pod \"goldmane-768f4c5c69-lf8r5\" (UID: \"58c7686e-2053-4b0c-9e02-052b8ed1eb7b\") " pod="calico-system/goldmane-768f4c5c69-lf8r5" Jul 11 00:16:29.990671 systemd[1]: Created slice kubepods-burstable-pod67c0bfe2_c5c5_48ac_a593_483d9d147ed4.slice - libcontainer container kubepods-burstable-pod67c0bfe2_c5c5_48ac_a593_483d9d147ed4.slice. Jul 11 00:16:30.006162 systemd[1]: Created slice kubepods-besteffort-podbe191a89_0427_4456_b3ac_3a29c67d84d3.slice - libcontainer container kubepods-besteffort-podbe191a89_0427_4456_b3ac_3a29c67d84d3.slice. Jul 11 00:16:30.020276 systemd[1]: Created slice kubepods-burstable-pod52892df7_6e82_4fd1_8c85_d93129166596.slice - libcontainer container kubepods-burstable-pod52892df7_6e82_4fd1_8c85_d93129166596.slice. 
Jul 11 00:16:30.086612 kubelet[2469]: I0711 00:16:30.086571 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e396f44e-1f83-4d6d-a81a-6662c419d2df-calico-apiserver-certs\") pod \"calico-apiserver-6f9544685f-msjqm\" (UID: \"e396f44e-1f83-4d6d-a81a-6662c419d2df\") " pod="calico-apiserver/calico-apiserver-6f9544685f-msjqm" Jul 11 00:16:30.086612 kubelet[2469]: I0711 00:16:30.086617 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c0bfe2-c5c5-48ac-a593-483d9d147ed4-config-volume\") pod \"coredns-674b8bbfcf-fxqzj\" (UID: \"67c0bfe2-c5c5-48ac-a593-483d9d147ed4\") " pod="kube-system/coredns-674b8bbfcf-fxqzj" Jul 11 00:16:30.086776 kubelet[2469]: I0711 00:16:30.086672 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a38583e9-7b43-4c77-8995-9dc39bf3123b-tigera-ca-bundle\") pod \"calico-kube-controllers-5f5d7d7856-pzcbk\" (UID: \"a38583e9-7b43-4c77-8995-9dc39bf3123b\") " pod="calico-system/calico-kube-controllers-5f5d7d7856-pzcbk" Jul 11 00:16:30.086776 kubelet[2469]: I0711 00:16:30.086750 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skqjd\" (UniqueName: \"kubernetes.io/projected/be191a89-0427-4456-b3ac-3a29c67d84d3-kube-api-access-skqjd\") pod \"whisker-7d7596b9b4-8vr76\" (UID: \"be191a89-0427-4456-b3ac-3a29c67d84d3\") " pod="calico-system/whisker-7d7596b9b4-8vr76" Jul 11 00:16:30.086776 kubelet[2469]: I0711 00:16:30.086769 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tkbf\" (UniqueName: \"kubernetes.io/projected/67c0bfe2-c5c5-48ac-a593-483d9d147ed4-kube-api-access-2tkbf\") pod \"coredns-674b8bbfcf-fxqzj\" (UID: \"67c0bfe2-c5c5-48ac-a593-483d9d147ed4\") " pod="kube-system/coredns-674b8bbfcf-fxqzj" Jul 11 00:16:30.086851 kubelet[2469]: I0711 00:16:30.086786 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4wmf\" (UniqueName: \"kubernetes.io/projected/a38583e9-7b43-4c77-8995-9dc39bf3123b-kube-api-access-t4wmf\") pod \"calico-kube-controllers-5f5d7d7856-pzcbk\" (UID: \"a38583e9-7b43-4c77-8995-9dc39bf3123b\") " pod="calico-system/calico-kube-controllers-5f5d7d7856-pzcbk" Jul 11 00:16:30.086851 kubelet[2469]: I0711 00:16:30.086803 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5879\" (UniqueName: \"kubernetes.io/projected/52892df7-6e82-4fd1-8c85-d93129166596-kube-api-access-n5879\") pod \"coredns-674b8bbfcf-mmnh8\" (UID: \"52892df7-6e82-4fd1-8c85-d93129166596\") " pod="kube-system/coredns-674b8bbfcf-mmnh8" Jul 11 00:16:30.086914 kubelet[2469]: I0711 00:16:30.086850 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be191a89-0427-4456-b3ac-3a29c67d84d3-whisker-backend-key-pair\") pod \"whisker-7d7596b9b4-8vr76\" (UID: \"be191a89-0427-4456-b3ac-3a29c67d84d3\") " pod="calico-system/whisker-7d7596b9b4-8vr76" Jul 11 00:16:30.086914 kubelet[2469]: I0711 00:16:30.086896 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b8e35104-a845-48f9-8ad5-cc498d1edd3f-calico-apiserver-certs\") pod \"calico-apiserver-77dc4685dc-gc67k\" (UID: \"b8e35104-a845-48f9-8ad5-cc498d1edd3f\") " pod="calico-apiserver/calico-apiserver-77dc4685dc-gc67k" Jul 11 00:16:30.086971 kubelet[2469]: I0711 00:16:30.086914 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g65r\" (UniqueName: \"kubernetes.io/projected/b8e35104-a845-48f9-8ad5-cc498d1edd3f-kube-api-access-7g65r\") pod \"calico-apiserver-77dc4685dc-gc67k\" (UID: \"b8e35104-a845-48f9-8ad5-cc498d1edd3f\") " pod="calico-apiserver/calico-apiserver-77dc4685dc-gc67k" Jul 11 00:16:30.086971 kubelet[2469]: I0711 00:16:30.086958 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkjq5\" (UniqueName: \"kubernetes.io/projected/e396f44e-1f83-4d6d-a81a-6662c419d2df-kube-api-access-lkjq5\") pod \"calico-apiserver-6f9544685f-msjqm\" (UID: \"e396f44e-1f83-4d6d-a81a-6662c419d2df\") " pod="calico-apiserver/calico-apiserver-6f9544685f-msjqm" Jul 11 00:16:30.087060 kubelet[2469]: I0711 00:16:30.086976 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be191a89-0427-4456-b3ac-3a29c67d84d3-whisker-ca-bundle\") pod \"whisker-7d7596b9b4-8vr76\" (UID: \"be191a89-0427-4456-b3ac-3a29c67d84d3\") " pod="calico-system/whisker-7d7596b9b4-8vr76" Jul 11 00:16:30.087060 kubelet[2469]: I0711 00:16:30.086993 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52892df7-6e82-4fd1-8c85-d93129166596-config-volume\") pod \"coredns-674b8bbfcf-mmnh8\" (UID: \"52892df7-6e82-4fd1-8c85-d93129166596\") " pod="kube-system/coredns-674b8bbfcf-mmnh8" Jul 11 00:16:30.136423 containerd[1445]: time="2025-07-11T00:16:30.136370787Z" level=error msg="Failed to destroy network for sandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.137024 containerd[1445]: time="2025-07-11T00:16:30.136819435Z" level=error msg="encountered an error cleaning up failed sandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.137024 containerd[1445]: time="2025-07-11T00:16:30.136907972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v292t,Uid:9fab7f82-393a-41e4-a999-9430044f6a22,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.137383 kubelet[2469]: E0711 00:16:30.137333 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.137443 kubelet[2469]: E0711 00:16:30.137403 2469 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v292t" Jul 11 00:16:30.137443 kubelet[2469]: E0711 00:16:30.137425 2469 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v292t" Jul 11 00:16:30.137497 kubelet[2469]: E0711 00:16:30.137475 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v292t_calico-system(9fab7f82-393a-41e4-a999-9430044f6a22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v292t_calico-system(9fab7f82-393a-41e4-a999-9430044f6a22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v292t" podUID="9fab7f82-393a-41e4-a999-9430044f6a22" Jul 11 00:16:30.161986 containerd[1445]: time="2025-07-11T00:16:30.161932765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9544685f-czxjh,Uid:f6ab768e-fd1a-4783-bee2-e85ef4dae0dc,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:16:30.248917 containerd[1445]: time="2025-07-11T00:16:30.248751369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lf8r5,Uid:58c7686e-2053-4b0c-9e02-052b8ed1eb7b,Namespace:calico-system,Attempt:0,}" Jul 11 00:16:30.253796 containerd[1445]: time="2025-07-11T00:16:30.253679297Z" level=error msg="Failed to destroy network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.254137 containerd[1445]: time="2025-07-11T00:16:30.254107941Z" level=error msg="encountered an error cleaning up failed sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.254334 containerd[1445]: time="2025-07-11T00:16:30.254235366Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f9544685f-czxjh,Uid:f6ab768e-fd1a-4783-bee2-e85ef4dae0dc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.254506 kubelet[2469]: E0711 00:16:30.254450 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.254581 kubelet[2469]: E0711 00:16:30.254512 2469 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f9544685f-czxjh" Jul 11 00:16:30.254581 kubelet[2469]: E0711 00:16:30.254537 2469 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f9544685f-czxjh" Jul 11 00:16:30.254635 kubelet[2469]: E0711 00:16:30.254587 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f9544685f-czxjh_calico-apiserver(f6ab768e-fd1a-4783-bee2-e85ef4dae0dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f9544685f-czxjh_calico-apiserver(f6ab768e-fd1a-4783-bee2-e85ef4dae0dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f9544685f-czxjh" podUID="f6ab768e-fd1a-4783-bee2-e85ef4dae0dc" Jul 11 00:16:30.262540 containerd[1445]: time="2025-07-11T00:16:30.262484386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5d7d7856-pzcbk,Uid:a38583e9-7b43-4c77-8995-9dc39bf3123b,Namespace:calico-system,Attempt:0,}" Jul 11 00:16:30.285604 containerd[1445]: time="2025-07-11T00:16:30.285491502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9544685f-msjqm,Uid:e396f44e-1f83-4d6d-a81a-6662c419d2df,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:16:30.303800 kubelet[2469]: E0711 00:16:30.303525 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:30.304041 containerd[1445]: 
time="2025-07-11T00:16:30.303982893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fxqzj,Uid:67c0bfe2-c5c5-48ac-a593-483d9d147ed4,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:30.311491 containerd[1445]: time="2025-07-11T00:16:30.311444037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d7596b9b4-8vr76,Uid:be191a89-0427-4456-b3ac-3a29c67d84d3,Namespace:calico-system,Attempt:0,}" Jul 11 00:16:30.324102 kubelet[2469]: E0711 00:16:30.324061 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:30.325168 containerd[1445]: time="2025-07-11T00:16:30.325131565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mmnh8,Uid:52892df7-6e82-4fd1-8c85-d93129166596,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:30.495814 containerd[1445]: time="2025-07-11T00:16:30.495765184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77dc4685dc-gc67k,Uid:b8e35104-a845-48f9-8ad5-cc498d1edd3f,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:16:30.526152 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266-shm.mount: Deactivated successfully. Jul 11 00:16:30.569517 containerd[1445]: time="2025-07-11T00:16:30.569410282Z" level=error msg="Failed to destroy network for sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.571472 containerd[1445]: time="2025-07-11T00:16:30.571427118Z" level=error msg="encountered an error cleaning up failed sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.571541 containerd[1445]: time="2025-07-11T00:16:30.571487530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lf8r5,Uid:58c7686e-2053-4b0c-9e02-052b8ed1eb7b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.571761 kubelet[2469]: E0711 00:16:30.571714 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.572048 kubelet[2469]: E0711 00:16:30.571771 2469 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lf8r5" Jul 11 00:16:30.572048 kubelet[2469]: E0711 00:16:30.571790 2469 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lf8r5" Jul 11 00:16:30.572048 kubelet[2469]: E0711 00:16:30.571832 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-lf8r5_calico-system(58c7686e-2053-4b0c-9e02-052b8ed1eb7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-lf8r5_calico-system(58c7686e-2053-4b0c-9e02-052b8ed1eb7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lf8r5" podUID="58c7686e-2053-4b0c-9e02-052b8ed1eb7b" Jul 11 00:16:30.572227 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26-shm.mount: Deactivated successfully. Jul 11 00:16:30.968087 kubelet[2469]: I0711 00:16:30.967698 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:16:30.974435 kubelet[2469]: I0711 00:16:30.972954 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:16:30.974597 containerd[1445]: time="2025-07-11T00:16:30.973443763Z" level=info msg="StopPodSandbox for \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\"" Jul 11 00:16:30.974597 containerd[1445]: time="2025-07-11T00:16:30.973541663Z" level=info msg="StopPodSandbox for \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\"" Jul 11 00:16:30.974597 containerd[1445]: time="2025-07-11T00:16:30.974060444Z" level=info msg="Ensure that sandbox 4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847 in task-service has been cleanup successfully" Jul 11 00:16:30.974597 containerd[1445]: time="2025-07-11T00:16:30.974069846Z" level=info msg="Ensure that sandbox a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266 in task-service has been cleanup successfully" Jul 11 00:16:30.975233 kubelet[2469]: I0711 00:16:30.974946 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:16:30.975499 containerd[1445]: time="2025-07-11T00:16:30.975433874Z" level=info msg="StopPodSandbox for \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\"" Jul 11 00:16:30.977054 containerd[1445]: time="2025-07-11T00:16:30.977017945Z" level=info msg="Ensure that sandbox 8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26 in task-service has been cleanup successfully" Jul 11 00:16:30.983519 containerd[1445]: 
time="2025-07-11T00:16:30.983476853Z" level=error msg="Failed to destroy network for sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.983927 containerd[1445]: time="2025-07-11T00:16:30.983901056Z" level=error msg="encountered an error cleaning up failed sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.983981 containerd[1445]: time="2025-07-11T00:16:30.983956307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5d7d7856-pzcbk,Uid:a38583e9-7b43-4c77-8995-9dc39bf3123b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.984187 kubelet[2469]: E0711 00:16:30.984137 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:30.984282 kubelet[2469]: E0711 00:16:30.984186 2469 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f5d7d7856-pzcbk" Jul 11 00:16:30.984282 kubelet[2469]: E0711 00:16:30.984206 2469 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f5d7d7856-pzcbk" Jul 11 00:16:30.984282 kubelet[2469]: E0711 00:16:30.984241 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f5d7d7856-pzcbk_calico-system(a38583e9-7b43-4c77-8995-9dc39bf3123b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f5d7d7856-pzcbk_calico-system(a38583e9-7b43-4c77-8995-9dc39bf3123b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f5d7d7856-pzcbk" podUID="a38583e9-7b43-4c77-8995-9dc39bf3123b" Jul 11 00:16:31.014318 containerd[1445]: time="2025-07-11T00:16:31.014053797Z" level=error msg="Failed to destroy network for sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.015748 containerd[1445]: time="2025-07-11T00:16:31.015699107Z" level=error msg="encountered an error cleaning up failed sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.015822 containerd[1445]: time="2025-07-11T00:16:31.015756038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9544685f-msjqm,Uid:e396f44e-1f83-4d6d-a81a-6662c419d2df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.017032 kubelet[2469]: E0711 00:16:31.016037 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.017032 kubelet[2469]: E0711 00:16:31.016095 2469 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f9544685f-msjqm" Jul 11 00:16:31.017032 kubelet[2469]: E0711 00:16:31.016114 2469 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f9544685f-msjqm" Jul 11 00:16:31.017182 kubelet[2469]: E0711 00:16:31.016173 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f9544685f-msjqm_calico-apiserver(e396f44e-1f83-4d6d-a81a-6662c419d2df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f9544685f-msjqm_calico-apiserver(e396f44e-1f83-4d6d-a81a-6662c419d2df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f9544685f-msjqm" podUID="e396f44e-1f83-4d6d-a81a-6662c419d2df" Jul 11 00:16:31.029085 containerd[1445]: time="2025-07-11T00:16:31.029039586Z" level=error msg="StopPodSandbox for \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\" failed" error="failed to destroy network for sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.029202 containerd[1445]: time="2025-07-11T00:16:31.029056069Z" level=error msg="Failed to destroy network for sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.029321 kubelet[2469]: E0711 00:16:31.029281 2469 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:16:31.029378 kubelet[2469]: E0711 00:16:31.029342 2469 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26"} Jul 11 00:16:31.029411 kubelet[2469]: E0711 00:16:31.029395 2469 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58c7686e-2053-4b0c-9e02-052b8ed1eb7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:16:31.030105 containerd[1445]: time="2025-07-11T00:16:31.030069020Z" level=error msg="Failed to destroy network for sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.030863 containerd[1445]: time="2025-07-11T00:16:31.030807440Z" level=error msg="encountered an error cleaning up failed sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.030943 containerd[1445]: time="2025-07-11T00:16:31.030893296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d7596b9b4-8vr76,Uid:be191a89-0427-4456-b3ac-3a29c67d84d3,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.031195 containerd[1445]: time="2025-07-11T00:16:31.031067009Z" level=error msg="encountered an error cleaning up failed sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.031195 containerd[1445]: time="2025-07-11T00:16:31.031109777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fxqzj,Uid:67c0bfe2-c5c5-48ac-a593-483d9d147ed4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.031324 kubelet[2469]: E0711 00:16:31.031271 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.031369 kubelet[2469]: E0711 00:16:31.031321 2469 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fxqzj" Jul 11 00:16:31.031369 kubelet[2469]: E0711 00:16:31.031340 2469 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fxqzj" Jul 11 00:16:31.031459 kubelet[2469]: E0711 00:16:31.031415 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fxqzj_kube-system(67c0bfe2-c5c5-48ac-a593-483d9d147ed4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fxqzj_kube-system(67c0bfe2-c5c5-48ac-a593-483d9d147ed4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fxqzj" podUID="67c0bfe2-c5c5-48ac-a593-483d9d147ed4" Jul 11 00:16:31.033086 kubelet[2469]: E0711 
00:16:31.031060 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.033319 kubelet[2469]: E0711 00:16:31.033210 2469 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d7596b9b4-8vr76" Jul 11 00:16:31.033319 kubelet[2469]: E0711 00:16:31.033234 2469 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d7596b9b4-8vr76" Jul 11 00:16:31.033319 kubelet[2469]: E0711 00:16:31.033283 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d7596b9b4-8vr76_calico-system(be191a89-0427-4456-b3ac-3a29c67d84d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d7596b9b4-8vr76_calico-system(be191a89-0427-4456-b3ac-3a29c67d84d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d7596b9b4-8vr76" podUID="be191a89-0427-4456-b3ac-3a29c67d84d3" Jul 11 00:16:31.033576 kubelet[2469]: E0711 00:16:31.029459 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58c7686e-2053-4b0c-9e02-052b8ed1eb7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lf8r5" podUID="58c7686e-2053-4b0c-9e02-052b8ed1eb7b" Jul 11 00:16:31.052956 containerd[1445]: time="2025-07-11T00:16:31.052894289Z" level=error msg="StopPodSandbox for \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\" failed" error="failed to destroy network for sandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.053177 kubelet[2469]: E0711 00:16:31.053111 2469 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:16:31.053220 kubelet[2469]: E0711 00:16:31.053179 2469 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266"} Jul 11 00:16:31.053220 kubelet[2469]: E0711 00:16:31.053211 2469 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9fab7f82-393a-41e4-a999-9430044f6a22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:16:31.053295 kubelet[2469]: E0711 00:16:31.053233 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9fab7f82-393a-41e4-a999-9430044f6a22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v292t" podUID="9fab7f82-393a-41e4-a999-9430044f6a22" Jul 11 00:16:31.054361 containerd[1445]: time="2025-07-11T00:16:31.054307676Z" level=error msg="Failed to destroy network for sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.054753 containerd[1445]: time="2025-07-11T00:16:31.054625496Z" level=error msg="encountered an error cleaning up failed sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.054753 containerd[1445]: time="2025-07-11T00:16:31.054676946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mmnh8,Uid:52892df7-6e82-4fd1-8c85-d93129166596,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.056030 kubelet[2469]: E0711 00:16:31.054868 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 11 00:16:31.056075 kubelet[2469]: E0711 00:16:31.056041 2469 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mmnh8" Jul 11 00:16:31.056075 kubelet[2469]: E0711 00:16:31.056063 2469 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mmnh8" Jul 11 00:16:31.056194 kubelet[2469]: E0711 00:16:31.056110 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mmnh8_kube-system(52892df7-6e82-4fd1-8c85-d93129166596)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mmnh8_kube-system(52892df7-6e82-4fd1-8c85-d93129166596)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mmnh8" podUID="52892df7-6e82-4fd1-8c85-d93129166596" Jul 11 00:16:31.058220 containerd[1445]: time="2025-07-11T00:16:31.058178247Z" level=error msg="StopPodSandbox for \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\" failed" error="failed to destroy network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.058353 kubelet[2469]: E0711 00:16:31.058317 2469 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:16:31.058394 kubelet[2469]: E0711 00:16:31.058353 2469 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847"} Jul 11 00:16:31.058394 kubelet[2469]: E0711 00:16:31.058380 2469 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Jul 11 00:16:31.058456 kubelet[2469]: E0711 00:16:31.058397 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f9544685f-czxjh" podUID="f6ab768e-fd1a-4783-bee2-e85ef4dae0dc" Jul 11 00:16:31.065452 containerd[1445]: time="2025-07-11T00:16:31.065409012Z" level=error msg="Failed to destroy network for sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.065756 containerd[1445]: time="2025-07-11T00:16:31.065713429Z" level=error msg="encountered an error cleaning up failed sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.065796 containerd[1445]: time="2025-07-11T00:16:31.065760838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77dc4685dc-gc67k,Uid:b8e35104-a845-48f9-8ad5-cc498d1edd3f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.065952 kubelet[2469]: E0711 00:16:31.065924 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:31.066001 kubelet[2469]: E0711 00:16:31.065964 2469 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77dc4685dc-gc67k" Jul 11 00:16:31.066001 kubelet[2469]: E0711 00:16:31.065983 2469 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-77dc4685dc-gc67k" Jul 11 00:16:31.066056 kubelet[2469]: E0711 00:16:31.066028 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77dc4685dc-gc67k_calico-apiserver(b8e35104-a845-48f9-8ad5-cc498d1edd3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77dc4685dc-gc67k_calico-apiserver(b8e35104-a845-48f9-8ad5-cc498d1edd3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77dc4685dc-gc67k" podUID="b8e35104-a845-48f9-8ad5-cc498d1edd3f" Jul 11 00:16:31.516558 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd-shm.mount: Deactivated successfully. Jul 11 00:16:31.516640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452-shm.mount: Deactivated successfully. Jul 11 00:16:31.516689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed-shm.mount: Deactivated successfully. Jul 11 00:16:31.978226 kubelet[2469]: I0711 00:16:31.978110 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:16:31.979520 containerd[1445]: time="2025-07-11T00:16:31.979106745Z" level=info msg="StopPodSandbox for \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\"" Jul 11 00:16:31.979520 containerd[1445]: time="2025-07-11T00:16:31.979269896Z" level=info msg="Ensure that sandbox 8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e in task-service has been cleanup successfully" Jul 11 00:16:31.984181 kubelet[2469]: I0711 00:16:31.983823 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:16:31.987517 kubelet[2469]: I0711 00:16:31.987481 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:16:31.988809 containerd[1445]: time="2025-07-11T00:16:31.988122327Z" level=info msg="StopPodSandbox for \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\"" Jul 11 00:16:31.988809 containerd[1445]: time="2025-07-11T00:16:31.988178217Z" level=info msg="StopPodSandbox for \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\"" Jul 11 00:16:31.988809 containerd[1445]: time="2025-07-11T00:16:31.988379895Z" level=info msg="Ensure that sandbox f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452 in task-service has been cleanup successfully" Jul 11 00:16:31.988979 containerd[1445]: time="2025-07-11T00:16:31.988834981Z" level=info msg="Ensure that sandbox 91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd in task-service has been cleanup successfully" Jul 11 00:16:31.993524 kubelet[2469]: I0711 00:16:31.993117 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" 
Jul 11 00:16:31.995420 containerd[1445]: time="2025-07-11T00:16:31.994557942Z" level=info msg="StopPodSandbox for \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\"" Jul 11 00:16:31.995605 containerd[1445]: time="2025-07-11T00:16:31.995523444Z" level=info msg="Ensure that sandbox 7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df in task-service has been cleanup successfully" Jul 11 00:16:31.995990 kubelet[2469]: I0711 00:16:31.995962 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:16:31.996710 containerd[1445]: time="2025-07-11T00:16:31.996670621Z" level=info msg="StopPodSandbox for \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\"" Jul 11 00:16:31.996998 containerd[1445]: time="2025-07-11T00:16:31.996975118Z" level=info msg="Ensure that sandbox 224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed in task-service has been cleanup successfully" Jul 11 00:16:32.003090 kubelet[2469]: I0711 00:16:32.002816 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:16:32.004937 containerd[1445]: time="2025-07-11T00:16:32.004730635Z" level=info msg="StopPodSandbox for \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\"" Jul 11 00:16:32.005153 containerd[1445]: time="2025-07-11T00:16:32.005108904Z" level=info msg="Ensure that sandbox af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828 in task-service has been cleanup successfully" Jul 11 00:16:32.044235 containerd[1445]: time="2025-07-11T00:16:32.044178363Z" level=error msg="StopPodSandbox for \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\" failed" error="failed to destroy network for sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:32.044509 kubelet[2469]: E0711 00:16:32.044416 2469 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:16:32.044509 kubelet[2469]: E0711 00:16:32.044468 2469 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e"} Jul 11 00:16:32.044509 kubelet[2469]: E0711 00:16:32.044500 2469 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52892df7-6e82-4fd1-8c85-d93129166596\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:16:32.044650 kubelet[2469]: E0711 00:16:32.044527 2469 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52892df7-6e82-4fd1-8c85-d93129166596\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mmnh8" podUID="52892df7-6e82-4fd1-8c85-d93129166596" Jul 11 00:16:32.064434 containerd[1445]: time="2025-07-11T00:16:32.064369352Z" level=error msg="StopPodSandbox for \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\" failed" error="failed to destroy network for sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:32.064880 kubelet[2469]: E0711 00:16:32.064738 2469 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:16:32.064972 kubelet[2469]: E0711 00:16:32.064894 2469 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df"} Jul 11 00:16:32.064972 kubelet[2469]: E0711 00:16:32.064929 2469 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67c0bfe2-c5c5-48ac-a593-483d9d147ed4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:16:32.064972 kubelet[2469]: E0711 00:16:32.064950 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67c0bfe2-c5c5-48ac-a593-483d9d147ed4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fxqzj" podUID="67c0bfe2-c5c5-48ac-a593-483d9d147ed4" Jul 11 00:16:32.067586 containerd[1445]: time="2025-07-11T00:16:32.067548410Z" level=error msg="StopPodSandbox for \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\" failed" error="failed to destroy network for sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:32.068034 kubelet[2469]: E0711 00:16:32.067856 2469 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:16:32.068034 kubelet[2469]: E0711 00:16:32.067949 2469 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828"} Jul 11 00:16:32.068034 kubelet[2469]: E0711 00:16:32.067982 2469 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8e35104-a845-48f9-8ad5-cc498d1edd3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:16:32.068034 kubelet[2469]: E0711 00:16:32.068003 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8e35104-a845-48f9-8ad5-cc498d1edd3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77dc4685dc-gc67k" podUID="b8e35104-a845-48f9-8ad5-cc498d1edd3f" Jul 11 00:16:32.074692 containerd[1445]: time="2025-07-11T00:16:32.071171068Z" level=error msg="StopPodSandbox for \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\" failed" error="failed to destroy network for sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:32.074787 containerd[1445]: time="2025-07-11T00:16:32.071278728Z" level=error msg="StopPodSandbox for \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\" failed" error="failed to destroy network for sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:32.074787 containerd[1445]: time="2025-07-11T00:16:32.071988897Z" level=error msg="StopPodSandbox for \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\" failed" error="failed to destroy network for sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:16:32.075277 kubelet[2469]: E0711 00:16:32.075000 2469 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:16:32.075277 kubelet[2469]: E0711 00:16:32.075062 2469 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452"} Jul 11 00:16:32.075277 kubelet[2469]: E0711 00:16:32.075100 2469 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a38583e9-7b43-4c77-8995-9dc39bf3123b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:16:32.075277 kubelet[2469]: E0711 00:16:32.075150 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a38583e9-7b43-4c77-8995-9dc39bf3123b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f5d7d7856-pzcbk" podUID="a38583e9-7b43-4c77-8995-9dc39bf3123b" Jul 11 00:16:32.075499 kubelet[2469]: E0711 00:16:32.075189 2469 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:16:32.075499 kubelet[2469]: E0711 00:16:32.075205 2469 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed"} Jul 11 00:16:32.075499 kubelet[2469]: E0711 00:16:32.075223 2469 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e396f44e-1f83-4d6d-a81a-6662c419d2df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:16:32.075499 kubelet[2469]: E0711 00:16:32.075250 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e396f44e-1f83-4d6d-a81a-6662c419d2df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f9544685f-msjqm" podUID="e396f44e-1f83-4d6d-a81a-6662c419d2df" Jul 11 00:16:32.075640 kubelet[2469]: E0711 00:16:32.075278 2469 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:16:32.075640 kubelet[2469]: E0711 00:16:32.075311 2469 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd"} Jul 11 00:16:32.075640 kubelet[2469]: E0711 00:16:32.075346 2469 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be191a89-0427-4456-b3ac-3a29c67d84d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:16:32.075640 kubelet[2469]: E0711 00:16:32.075382 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be191a89-0427-4456-b3ac-3a29c67d84d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d7596b9b4-8vr76" podUID="be191a89-0427-4456-b3ac-3a29c67d84d3" Jul 11 00:16:35.014563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611310529.mount: Deactivated successfully. 
Jul 11 00:16:35.268798 containerd[1445]: time="2025-07-11T00:16:35.268672896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:35.270008 containerd[1445]: time="2025-07-11T00:16:35.269968667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 11 00:16:35.270970 containerd[1445]: time="2025-07-11T00:16:35.270934224Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:35.300933 containerd[1445]: time="2025-07-11T00:16:35.300858144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:35.301467 containerd[1445]: time="2025-07-11T00:16:35.301420236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 5.333309226s" Jul 11 00:16:35.301467 containerd[1445]: time="2025-07-11T00:16:35.301463163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 11 00:16:35.314812 containerd[1445]: time="2025-07-11T00:16:35.314761331Z" level=info msg="CreateContainer within sandbox \"76616ace514b5ecbd65aeca040fb910d70646b7928cbc9061229877a70598270\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:16:35.353035 containerd[1445]: time="2025-07-11T00:16:35.352977842Z" level=info msg="CreateContainer within sandbox \"76616ace514b5ecbd65aeca040fb910d70646b7928cbc9061229877a70598270\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"322356ffabbe85d22d67e4467b25f4f70b26af5b9c696c304d3c8ec10a91fa09\"" Jul 11 00:16:35.353752 containerd[1445]: time="2025-07-11T00:16:35.353490366Z" level=info msg="StartContainer for \"322356ffabbe85d22d67e4467b25f4f70b26af5b9c696c304d3c8ec10a91fa09\"" Jul 11 00:16:35.422084 systemd[1]: Started cri-containerd-322356ffabbe85d22d67e4467b25f4f70b26af5b9c696c304d3c8ec10a91fa09.scope - libcontainer container 322356ffabbe85d22d67e4467b25f4f70b26af5b9c696c304d3c8ec10a91fa09. Jul 11 00:16:35.455288 containerd[1445]: time="2025-07-11T00:16:35.455237397Z" level=info msg="StartContainer for \"322356ffabbe85d22d67e4467b25f4f70b26af5b9c696c304d3c8ec10a91fa09\" returns successfully" Jul 11 00:16:35.673334 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:16:35.673447 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 11 00:16:35.791765 containerd[1445]: time="2025-07-11T00:16:35.791717343Z" level=info msg="StopPodSandbox for \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\"" Jul 11 00:16:36.040166 kubelet[2469]: I0711 00:16:36.040102 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6qcgz" podStartSLOduration=1.6690505 podStartE2EDuration="15.040086268s" podCreationTimestamp="2025-07-11 00:16:21 +0000 UTC" firstStartedPulling="2025-07-11 00:16:21.931070939 +0000 UTC m=+21.189355962" lastFinishedPulling="2025-07-11 00:16:35.302106707 +0000 UTC m=+34.560391730" observedRunningTime="2025-07-11 00:16:36.039605272 +0000 UTC m=+35.297890255" watchObservedRunningTime="2025-07-11 00:16:36.040086268 +0000 UTC m=+35.298371291" Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:35.917 [INFO][3823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:35.919 [INFO][3823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" iface="eth0" netns="/var/run/netns/cni-1ea0a843-961d-ea74-742f-35d6ca649a37" Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:35.919 [INFO][3823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" iface="eth0" netns="/var/run/netns/cni-1ea0a843-961d-ea74-742f-35d6ca649a37" Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:35.922 [INFO][3823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" iface="eth0" netns="/var/run/netns/cni-1ea0a843-961d-ea74-742f-35d6ca649a37" Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:35.922 [INFO][3823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:35.922 [INFO][3823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:36.093 [INFO][3834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" HandleID="k8s-pod-network.91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Workload="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:36.093 [INFO][3834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:36.093 [INFO][3834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:36.107 [WARNING][3834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" HandleID="k8s-pod-network.91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Workload="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:36.107 [INFO][3834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" HandleID="k8s-pod-network.91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Workload="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:36.108 [INFO][3834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:16:36.114146 containerd[1445]: 2025-07-11 00:16:36.112 [INFO][3823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:16:36.114703 containerd[1445]: time="2025-07-11T00:16:36.114228793Z" level=info msg="TearDown network for sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\" successfully" Jul 11 00:16:36.114703 containerd[1445]: time="2025-07-11T00:16:36.114253117Z" level=info msg="StopPodSandbox for \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\" returns successfully" Jul 11 00:16:36.116068 systemd[1]: run-netns-cni\x2d1ea0a843\x2d961d\x2dea74\x2d742f\x2d35d6ca649a37.mount: Deactivated successfully. Jul 11 00:16:36.140795 kubelet[2469]: I0711 00:16:36.140426 2469 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skqjd\" (UniqueName: \"kubernetes.io/projected/be191a89-0427-4456-b3ac-3a29c67d84d3-kube-api-access-skqjd\") pod \"be191a89-0427-4456-b3ac-3a29c67d84d3\" (UID: \"be191a89-0427-4456-b3ac-3a29c67d84d3\") " Jul 11 00:16:36.140795 kubelet[2469]: I0711 00:16:36.140477 2469 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be191a89-0427-4456-b3ac-3a29c67d84d3-whisker-backend-key-pair\") pod \"be191a89-0427-4456-b3ac-3a29c67d84d3\" (UID: \"be191a89-0427-4456-b3ac-3a29c67d84d3\") " Jul 11 00:16:36.140795 kubelet[2469]: I0711 00:16:36.140500 2469 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be191a89-0427-4456-b3ac-3a29c67d84d3-whisker-ca-bundle\") pod \"be191a89-0427-4456-b3ac-3a29c67d84d3\" (UID: \"be191a89-0427-4456-b3ac-3a29c67d84d3\") " Jul 11 00:16:36.145561 systemd[1]: var-lib-kubelet-pods-be191a89\x2d0427\x2d4456\x2db3ac\x2d3a29c67d84d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskqjd.mount: Deactivated successfully. Jul 11 00:16:36.145660 systemd[1]: var-lib-kubelet-pods-be191a89\x2d0427\x2d4456\x2db3ac\x2d3a29c67d84d3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 00:16:36.148980 kubelet[2469]: I0711 00:16:36.148770 2469 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be191a89-0427-4456-b3ac-3a29c67d84d3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "be191a89-0427-4456-b3ac-3a29c67d84d3" (UID: "be191a89-0427-4456-b3ac-3a29c67d84d3"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:16:36.148980 kubelet[2469]: I0711 00:16:36.148853 2469 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be191a89-0427-4456-b3ac-3a29c67d84d3-kube-api-access-skqjd" (OuterVolumeSpecName: "kube-api-access-skqjd") pod "be191a89-0427-4456-b3ac-3a29c67d84d3" (UID: "be191a89-0427-4456-b3ac-3a29c67d84d3"). InnerVolumeSpecName "kube-api-access-skqjd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:16:36.153053 kubelet[2469]: I0711 00:16:36.152934 2469 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be191a89-0427-4456-b3ac-3a29c67d84d3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "be191a89-0427-4456-b3ac-3a29c67d84d3" (UID: "be191a89-0427-4456-b3ac-3a29c67d84d3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:16:36.241425 kubelet[2469]: I0711 00:16:36.241364 2469 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-skqjd\" (UniqueName: \"kubernetes.io/projected/be191a89-0427-4456-b3ac-3a29c67d84d3-kube-api-access-skqjd\") on node \"localhost\" DevicePath \"\"" Jul 11 00:16:36.241425 kubelet[2469]: I0711 00:16:36.241396 2469 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be191a89-0427-4456-b3ac-3a29c67d84d3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:16:36.241425 kubelet[2469]: I0711 00:16:36.241406 2469 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be191a89-0427-4456-b3ac-3a29c67d84d3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:16:36.853599 systemd[1]: Removed slice kubepods-besteffort-podbe191a89_0427_4456_b3ac_3a29c67d84d3.slice - libcontainer container kubepods-besteffort-podbe191a89_0427_4456_b3ac_3a29c67d84d3.slice. Jul 11 00:16:37.079284 systemd[1]: Created slice kubepods-besteffort-pod02f2a23b_98eb_4fa0_b5ff_cb58ff23a651.slice - libcontainer container kubepods-besteffort-pod02f2a23b_98eb_4fa0_b5ff_cb58ff23a651.slice. 
Jul 11 00:16:37.147047 kubelet[2469]: I0711 00:16:37.146940 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02f2a23b-98eb-4fa0-b5ff-cb58ff23a651-whisker-ca-bundle\") pod \"whisker-655c689f84-tkr2b\" (UID: \"02f2a23b-98eb-4fa0-b5ff-cb58ff23a651\") " pod="calico-system/whisker-655c689f84-tkr2b" Jul 11 00:16:37.147047 kubelet[2469]: I0711 00:16:37.146989 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrf44\" (UniqueName: \"kubernetes.io/projected/02f2a23b-98eb-4fa0-b5ff-cb58ff23a651-kube-api-access-jrf44\") pod \"whisker-655c689f84-tkr2b\" (UID: \"02f2a23b-98eb-4fa0-b5ff-cb58ff23a651\") " pod="calico-system/whisker-655c689f84-tkr2b" Jul 11 00:16:37.147047 kubelet[2469]: I0711 00:16:37.147012 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/02f2a23b-98eb-4fa0-b5ff-cb58ff23a651-whisker-backend-key-pair\") pod \"whisker-655c689f84-tkr2b\" (UID: \"02f2a23b-98eb-4fa0-b5ff-cb58ff23a651\") " pod="calico-system/whisker-655c689f84-tkr2b" Jul 11 00:16:37.383599 containerd[1445]: time="2025-07-11T00:16:37.383545823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-655c689f84-tkr2b,Uid:02f2a23b-98eb-4fa0-b5ff-cb58ff23a651,Namespace:calico-system,Attempt:0,}" Jul 11 00:16:37.552269 systemd-networkd[1376]: cali233f6133732: Link UP Jul 11 00:16:37.552583 systemd-networkd[1376]: cali233f6133732: Gained carrier Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.462 [INFO][4004] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.475 [INFO][4004] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--655c689f84--tkr2b-eth0 whisker-655c689f84- calico-system 02f2a23b-98eb-4fa0-b5ff-cb58ff23a651 925 0 2025-07-11 00:16:37 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:655c689f84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-655c689f84-tkr2b eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali233f6133732 [] [] }} ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Namespace="calico-system" Pod="whisker-655c689f84-tkr2b" WorkloadEndpoint="localhost-k8s-whisker--655c689f84--tkr2b-" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.475 [INFO][4004] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Namespace="calico-system" Pod="whisker-655c689f84-tkr2b" WorkloadEndpoint="localhost-k8s-whisker--655c689f84--tkr2b-eth0" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.498 [INFO][4017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" HandleID="k8s-pod-network.207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Workload="localhost-k8s-whisker--655c689f84--tkr2b-eth0" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.498 [INFO][4017] ipam/ipam_plugin.go 265: Auto assigning IP
ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" HandleID="k8s-pod-network.207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Workload="localhost-k8s-whisker--655c689f84--tkr2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-655c689f84-tkr2b", "timestamp":"2025-07-11 00:16:37.498073447 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.498 [INFO][4017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.498 [INFO][4017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.498 [INFO][4017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.509 [INFO][4017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" host="localhost" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.525 [INFO][4017] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.529 [INFO][4017] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.531 [INFO][4017] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.532 [INFO][4017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.532 [INFO][4017] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" host="localhost" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.533 [INFO][4017] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.537 [INFO][4017] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" host="localhost" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.541 [INFO][4017] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" host="localhost" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.541 [INFO][4017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" host="localhost" Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.541 [INFO][4017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:16:37.564259 containerd[1445]: 2025-07-11 00:16:37.541 [INFO][4017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" HandleID="k8s-pod-network.207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Workload="localhost-k8s-whisker--655c689f84--tkr2b-eth0" Jul 11 00:16:37.564821 containerd[1445]: 2025-07-11 00:16:37.543 [INFO][4004] cni-plugin/k8s.go 418: Populated endpoint ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Namespace="calico-system" Pod="whisker-655c689f84-tkr2b" WorkloadEndpoint="localhost-k8s-whisker--655c689f84--tkr2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--655c689f84--tkr2b-eth0", GenerateName:"whisker-655c689f84-", Namespace:"calico-system", SelfLink:"", UID:"02f2a23b-98eb-4fa0-b5ff-cb58ff23a651", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"655c689f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-655c689f84-tkr2b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali233f6133732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:37.564821 containerd[1445]: 2025-07-11 00:16:37.543 [INFO][4004] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Namespace="calico-system" Pod="whisker-655c689f84-tkr2b" WorkloadEndpoint="localhost-k8s-whisker--655c689f84--tkr2b-eth0" Jul 11 00:16:37.564821 containerd[1445]: 2025-07-11 00:16:37.543 [INFO][4004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali233f6133732 ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Namespace="calico-system" Pod="whisker-655c689f84-tkr2b" WorkloadEndpoint="localhost-k8s-whisker--655c689f84--tkr2b-eth0" Jul 11 00:16:37.564821 containerd[1445]: 2025-07-11 00:16:37.553 [INFO][4004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Namespace="calico-system" Pod="whisker-655c689f84-tkr2b" WorkloadEndpoint="localhost-k8s-whisker--655c689f84--tkr2b-eth0" Jul 11 00:16:37.564821 containerd[1445]: 2025-07-11 00:16:37.553 [INFO][4004] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Namespace="calico-system" Pod="whisker-655c689f84-tkr2b" WorkloadEndpoint="localhost-k8s-whisker--655c689f84--tkr2b-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--655c689f84--tkr2b-eth0", GenerateName:"whisker-655c689f84-", Namespace:"calico-system", SelfLink:"", UID:"02f2a23b-98eb-4fa0-b5ff-cb58ff23a651", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"655c689f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa", Pod:"whisker-655c689f84-tkr2b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali233f6133732", MAC:"4a:3e:88:80:4a:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:37.564821 containerd[1445]: 2025-07-11 00:16:37.560 [INFO][4004] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa" Namespace="calico-system" Pod="whisker-655c689f84-tkr2b" WorkloadEndpoint="localhost-k8s-whisker--655c689f84--tkr2b-eth0" Jul 11 00:16:37.585918 containerd[1445]: time="2025-07-11T00:16:37.585665523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:37.585918 containerd[1445]: time="2025-07-11T00:16:37.585727692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:37.585918 containerd[1445]: time="2025-07-11T00:16:37.585738294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:37.585918 containerd[1445]: time="2025-07-11T00:16:37.585813905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:37.601026 systemd[1]: Started cri-containerd-207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa.scope - libcontainer container 207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa. 
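
Three details in the records above are easy to miss. First, one identifier threads through everything: the ID that RunPodSandbox returns just below (207ec339...) is the same ID Calico logs as the CNI ContainerID and the same ID in the transient systemd unit cri-containerd-207ec....scope, because under containerd's CRI implementation the sandbox (pause) container's ID names both the pod's cgroup scope and its network attachment. Second, the burst of "loading plugin io.containerd.*" messages tagged runtime=io.containerd.runc.v2 marks a fresh per-pod shim starting up; note that it recurs below for each new sandbox but not for the whisker and whisker-backend containers, which start inside this sandbox's existing shim. Third, the systemd-resolved complaint that follows ("Failed to determine the local hostname and LLMNR/mDNS names, ignoring") appears each time a new cali* interface comes up and, as its own wording says, is ignored; it is cosmetic here.
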
Jul 11 00:16:37.611523 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:16:37.633839 containerd[1445]: time="2025-07-11T00:16:37.633802903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-655c689f84-tkr2b,Uid:02f2a23b-98eb-4fa0-b5ff-cb58ff23a651,Namespace:calico-system,Attempt:0,} returns sandbox id \"207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa\"" Jul 11 00:16:37.644037 containerd[1445]: time="2025-07-11T00:16:37.644000178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:16:38.799376 systemd-networkd[1376]: cali233f6133732: Gained IPv6LL Jul 11 00:16:38.841476 kubelet[2469]: I0711 00:16:38.841426 2469 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be191a89-0427-4456-b3ac-3a29c67d84d3" path="/var/lib/kubelet/pods/be191a89-0427-4456-b3ac-3a29c67d84d3/volumes" Jul 11 00:16:39.187910 containerd[1445]: time="2025-07-11T00:16:39.187753237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:39.189246 containerd[1445]: time="2025-07-11T00:16:39.189211326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 11 00:16:39.191721 containerd[1445]: time="2025-07-11T00:16:39.191678359Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:39.195379 containerd[1445]: time="2025-07-11T00:16:39.195320080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:39.196211 containerd[1445]: time="2025-07-11T00:16:39.196162081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.552125778s" Jul 11 00:16:39.196211 containerd[1445]: time="2025-07-11T00:16:39.196200326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 11 00:16:39.201710 containerd[1445]: time="2025-07-11T00:16:39.201657268Z" level=info msg="CreateContainer within sandbox \"207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:16:39.249404 containerd[1445]: time="2025-07-11T00:16:39.249351217Z" level=info msg="CreateContainer within sandbox \"207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3dc70499421d8c06f5a9da1d2289268492ba3b830a272bf201fbafa28eaad6fc\"" Jul 11 00:16:39.250019 containerd[1445]: time="2025-07-11T00:16:39.249990748Z" level=info msg="StartContainer for \"3dc70499421d8c06f5a9da1d2289268492ba3b830a272bf201fbafa28eaad6fc\"" Jul 11 00:16:39.281085 systemd[1]: Started cri-containerd-3dc70499421d8c06f5a9da1d2289268492ba3b830a272bf201fbafa28eaad6fc.scope - libcontainer container 
3dc70499421d8c06f5a9da1d2289268492ba3b830a272bf201fbafa28eaad6fc. Jul 11 00:16:39.311406 containerd[1445]: time="2025-07-11T00:16:39.311364536Z" level=info msg="StartContainer for \"3dc70499421d8c06f5a9da1d2289268492ba3b830a272bf201fbafa28eaad6fc\" returns successfully" Jul 11 00:16:39.319503 containerd[1445]: time="2025-07-11T00:16:39.319459695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:16:41.183979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3546493920.mount: Deactivated successfully. Jul 11 00:16:41.343959 containerd[1445]: time="2025-07-11T00:16:41.343348275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:41.347315 containerd[1445]: time="2025-07-11T00:16:41.346834306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 11 00:16:41.347315 containerd[1445]: time="2025-07-11T00:16:41.347021131Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:41.355458 containerd[1445]: time="2025-07-11T00:16:41.354997048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:41.358243 containerd[1445]: time="2025-07-11T00:16:41.356618667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.037107964s" Jul 11 00:16:41.358243 containerd[1445]: time="2025-07-11T00:16:41.356656232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 11 00:16:41.377021 containerd[1445]: time="2025-07-11T00:16:41.376822355Z" level=info msg="CreateContainer within sandbox \"207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:16:41.410200 containerd[1445]: time="2025-07-11T00:16:41.410141653Z" level=info msg="CreateContainer within sandbox \"207ec33951882385b490347c7a100e6cf59b36b0e22eea029b1c01ea89b91dfa\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"158808c691aa28674c3e30dca759a2873e06a5f1b186676ef3951758fe9ac15a\"" Jul 11 00:16:41.413742 containerd[1445]: time="2025-07-11T00:16:41.411977981Z" level=info msg="StartContainer for \"158808c691aa28674c3e30dca759a2873e06a5f1b186676ef3951758fe9ac15a\"" Jul 11 00:16:41.459101 systemd[1]: Started cri-containerd-158808c691aa28674c3e30dca759a2873e06a5f1b186676ef3951758fe9ac15a.scope - libcontainer container 158808c691aa28674c3e30dca759a2873e06a5f1b186676ef3951758fe9ac15a. 
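
Some quick arithmetic on the two pulls above: whisker (reported size 5,974,847 bytes; 4,605,614 bytes read, presumably the compressed transfer) completed in 1.552 s, and whisker-backend (30,814,411 bytes) in 2.037 s — roughly 3.8 MB/s and 15.1 MB/s against the reported sizes. The pulls are quick for their size, yet they still account for most of this pod's five-second end-to-end startup, as the kubelet's latency record a few lines below confirms.
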
Jul 11 00:16:41.495543 containerd[1445]: time="2025-07-11T00:16:41.495489937Z" level=info msg="StartContainer for \"158808c691aa28674c3e30dca759a2873e06a5f1b186676ef3951758fe9ac15a\" returns successfully" Jul 11 00:16:42.075933 kubelet[2469]: I0711 00:16:42.075857 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-655c689f84-tkr2b" podStartSLOduration=1.360672138 podStartE2EDuration="5.075830414s" podCreationTimestamp="2025-07-11 00:16:37 +0000 UTC" firstStartedPulling="2025-07-11 00:16:37.643744179 +0000 UTC m=+36.902029162" lastFinishedPulling="2025-07-11 00:16:41.358902415 +0000 UTC m=+40.617187438" observedRunningTime="2025-07-11 00:16:42.075067393 +0000 UTC m=+41.333352416" watchObservedRunningTime="2025-07-11 00:16:42.075830414 +0000 UTC m=+41.334115437" Jul 11 00:16:42.842514 containerd[1445]: time="2025-07-11T00:16:42.842207005Z" level=info msg="StopPodSandbox for \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\"" Jul 11 00:16:42.842514 containerd[1445]: time="2025-07-11T00:16:42.842511245Z" level=info msg="StopPodSandbox for \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\"" Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.916 [INFO][4335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.916 [INFO][4335] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" iface="eth0" netns="/var/run/netns/cni-80f2efd1-fb76-11db-b4de-dd421d08fbb4" Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.916 [INFO][4335] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" iface="eth0" netns="/var/run/netns/cni-80f2efd1-fb76-11db-b4de-dd421d08fbb4" Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.916 [INFO][4335] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" iface="eth0" netns="/var/run/netns/cni-80f2efd1-fb76-11db-b4de-dd421d08fbb4" Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.916 [INFO][4335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.916 [INFO][4335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.942 [INFO][4351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" HandleID="k8s-pod-network.f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.943 [INFO][4351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.943 [INFO][4351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.960 [WARNING][4351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" HandleID="k8s-pod-network.f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.960 [INFO][4351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" HandleID="k8s-pod-network.f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.961 [INFO][4351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:16:42.965252 containerd[1445]: 2025-07-11 00:16:42.963 [INFO][4335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:16:42.967377 systemd[1]: run-netns-cni\x2d80f2efd1\x2dfb76\x2d11db\x2db4de\x2ddd421d08fbb4.mount: Deactivated successfully. Jul 11 00:16:42.969756 containerd[1445]: time="2025-07-11T00:16:42.969707027Z" level=info msg="TearDown network for sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\" successfully" Jul 11 00:16:42.969756 containerd[1445]: time="2025-07-11T00:16:42.969752313Z" level=info msg="StopPodSandbox for \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\" returns successfully" Jul 11 00:16:42.971081 containerd[1445]: time="2025-07-11T00:16:42.971030360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5d7d7856-pzcbk,Uid:a38583e9-7b43-4c77-8995-9dc39bf3123b,Namespace:calico-system,Attempt:1,}" Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.932 [INFO][4334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.932 [INFO][4334] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" iface="eth0" netns="/var/run/netns/cni-c97df960-4e28-7e25-c2c6-ed8baf489d84" Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.932 [INFO][4334] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" iface="eth0" netns="/var/run/netns/cni-c97df960-4e28-7e25-c2c6-ed8baf489d84" Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.933 [INFO][4334] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" iface="eth0" netns="/var/run/netns/cni-c97df960-4e28-7e25-c2c6-ed8baf489d84" Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.933 [INFO][4334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.933 [INFO][4334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.960 [INFO][4358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" HandleID="k8s-pod-network.af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.961 [INFO][4358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.961 [INFO][4358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.972 [WARNING][4358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" HandleID="k8s-pod-network.af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.972 [INFO][4358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" HandleID="k8s-pod-network.af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.973 [INFO][4358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:16:42.977891 containerd[1445]: 2025-07-11 00:16:42.975 [INFO][4334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:16:42.978295 containerd[1445]: time="2025-07-11T00:16:42.978050402Z" level=info msg="TearDown network for sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\" successfully" Jul 11 00:16:42.978295 containerd[1445]: time="2025-07-11T00:16:42.978072485Z" level=info msg="StopPodSandbox for \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\" returns successfully" Jul 11 00:16:42.978714 containerd[1445]: time="2025-07-11T00:16:42.978690206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77dc4685dc-gc67k,Uid:b8e35104-a845-48f9-8ad5-cc498d1edd3f,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:16:42.981131 systemd[1]: run-netns-cni\x2dc97df960\x2d4e28\x2d7e25\x2dc2c6\x2ded8baf489d84.mount: Deactivated successfully. 
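
The pod_startup_latency_tracker record above is internally consistent and worth decoding once: podStartE2EDuration (5.076 s) is observedRunningTime (00:16:42.076) minus podCreationTimestamp (00:16:37), and podStartSLOduration (1.361 s) is that same span minus the image-pull window, firstStartedPulling 00:16:37.644 to lastFinishedPulling 00:16:41.359 (3.715 s). In other words, pulling the two whisker images accounts for almost three quarters of the pod's startup.

The two StopPodSandbox teardowns that follow both log "[WARNING] Asked to release address but it doesn't exist. Ignoring": neither handle has a recorded allocation any more, so the release is a no-op, and Calico deliberately treats that as a warning rather than a failure so that CNI DEL stays idempotent and safe to retry. A minimal, hypothetical Go sketch of that shape (invented names, not Calico's real API):

    package main

    import "log"

    // allocations maps IPAM handle IDs (k8s-pod-network.<sandboxID>) to
    // the address each one claimed. Empty here, as it evidently was for
    // the two sandboxes torn down above.
    var allocations = map[string]string{}

    // releaseByHandle mirrors the logged behavior: releasing a handle
    // that holds nothing warns and returns; it never fails, so teardown
    // can always proceed and be retried.
    func releaseByHandle(handleID string) {
        if ip, ok := allocations[handleID]; ok {
            delete(allocations, handleID)
            log.Printf("released %s (handle %s)", ip, handleID)
            return
        }
        log.Printf("WARNING: asked to release %s but it doesn't exist; ignoring", handleID)
    }

    func main() {
        releaseByHandle("k8s-pod-network.f8869e6ca30a...")
    }

Both pods are re-created immediately afterwards as Attempt:1 sandboxes.
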
Jul 11 00:16:43.256467 systemd-networkd[1376]: cali517df191baf: Link UP Jul 11 00:16:43.256619 systemd-networkd[1376]: cali517df191baf: Gained carrier Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.079 [INFO][4368] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.098 [INFO][4368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0 calico-apiserver-77dc4685dc- calico-apiserver b8e35104-a845-48f9-8ad5-cc498d1edd3f 963 0 2025-07-11 00:16:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77dc4685dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77dc4685dc-gc67k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali517df191baf [] [] }} ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Namespace="calico-apiserver" Pod="calico-apiserver-77dc4685dc-gc67k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.098 [INFO][4368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Namespace="calico-apiserver" Pod="calico-apiserver-77dc4685dc-gc67k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.142 [INFO][4396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" HandleID="k8s-pod-network.d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.142 [INFO][4396] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" HandleID="k8s-pod-network.d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77dc4685dc-gc67k", "timestamp":"2025-07-11 00:16:43.142737534 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.143 [INFO][4396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.143 [INFO][4396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.143 [INFO][4396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.189 [INFO][4396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" host="localhost" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.209 [INFO][4396] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.220 [INFO][4396] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.222 [INFO][4396] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.224 [INFO][4396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.224 [INFO][4396] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" host="localhost" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.227 [INFO][4396] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.236 [INFO][4396] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" host="localhost" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.249 [INFO][4396] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" host="localhost" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.249 [INFO][4396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" host="localhost" Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.249 [INFO][4396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:16:43.279011 containerd[1445]: 2025-07-11 00:16:43.249 [INFO][4396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" HandleID="k8s-pod-network.d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:16:43.279829 containerd[1445]: 2025-07-11 00:16:43.255 [INFO][4368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Namespace="calico-apiserver" Pod="calico-apiserver-77dc4685dc-gc67k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0", GenerateName:"calico-apiserver-77dc4685dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8e35104-a845-48f9-8ad5-cc498d1edd3f", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77dc4685dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77dc4685dc-gc67k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali517df191baf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:43.279829 containerd[1445]: 2025-07-11 00:16:43.255 [INFO][4368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Namespace="calico-apiserver" Pod="calico-apiserver-77dc4685dc-gc67k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:16:43.279829 containerd[1445]: 2025-07-11 00:16:43.255 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali517df191baf ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Namespace="calico-apiserver" Pod="calico-apiserver-77dc4685dc-gc67k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:16:43.279829 containerd[1445]: 2025-07-11 00:16:43.256 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Namespace="calico-apiserver" Pod="calico-apiserver-77dc4685dc-gc67k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:16:43.279829 containerd[1445]: 2025-07-11 00:16:43.256 [INFO][4368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Namespace="calico-apiserver" Pod="calico-apiserver-77dc4685dc-gc67k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0", GenerateName:"calico-apiserver-77dc4685dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8e35104-a845-48f9-8ad5-cc498d1edd3f", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77dc4685dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e", Pod:"calico-apiserver-77dc4685dc-gc67k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali517df191baf", MAC:"a2:50:e5:3e:fb:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:43.279829 containerd[1445]: 2025-07-11 00:16:43.271 [INFO][4368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e" Namespace="calico-apiserver" Pod="calico-apiserver-77dc4685dc-gc67k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:16:43.303846 containerd[1445]: time="2025-07-11T00:16:43.303748516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:43.303846 containerd[1445]: time="2025-07-11T00:16:43.303817965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:43.304133 containerd[1445]: time="2025-07-11T00:16:43.304092800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:43.304633 containerd[1445]: time="2025-07-11T00:16:43.304592464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:43.333092 systemd[1]: Started cri-containerd-d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e.scope - libcontainer container d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e. 
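
This block closes the loop opened by the teardown above: the calico-apiserver pod (UID b8e35104-a845-48f9-8ad5-cc498d1edd3f, unchanged) comes back as an Attempt:1 sandbox, Calico finds the existing WorkloadEndpoint (ResourceVersion 963) rather than creating a new one, and the endpoint is updated in place with the new sandbox ContainerID d62f39bf... and MAC a2:50:e5:3e:fb:9b. It draws 192.168.88.130, the next free address in the same /26 block the whisker pod allocated from.
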
Jul 11 00:16:43.345444 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:16:43.359370 systemd-networkd[1376]: cali8a8f6ea5fa5: Link UP Jul 11 00:16:43.360666 systemd-networkd[1376]: cali8a8f6ea5fa5: Gained carrier Jul 11 00:16:43.380569 containerd[1445]: time="2025-07-11T00:16:43.380515690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77dc4685dc-gc67k,Uid:b8e35104-a845-48f9-8ad5-cc498d1edd3f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e\"" Jul 11 00:16:43.382434 containerd[1445]: time="2025-07-11T00:16:43.382403851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.081 [INFO][4379] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.106 [INFO][4379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0 calico-kube-controllers-5f5d7d7856- calico-system a38583e9-7b43-4c77-8995-9dc39bf3123b 962 0 2025-07-11 00:16:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f5d7d7856 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f5d7d7856-pzcbk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8a8f6ea5fa5 [] [] }} ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Namespace="calico-system" Pod="calico-kube-controllers-5f5d7d7856-pzcbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.107 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Namespace="calico-system" Pod="calico-kube-controllers-5f5d7d7856-pzcbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.166 [INFO][4402] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" HandleID="k8s-pod-network.3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.167 [INFO][4402] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" HandleID="k8s-pod-network.3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001235a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f5d7d7856-pzcbk", "timestamp":"2025-07-11 00:16:43.166961711 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.167 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.250 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.250 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.293 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" host="localhost" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.310 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.321 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.323 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.328 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.328 [INFO][4402] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" host="localhost" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.330 [INFO][4402] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4 Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.338 [INFO][4402] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" host="localhost" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.354 [INFO][4402] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" host="localhost" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.354 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" host="localhost" Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.354 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
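
The interleaved [4396]/[4402] lines above capture two concurrent CNI ADDs serializing on the host-wide IPAM lock, which is also why the two pods draw consecutive addresses from the same block:

    00:16:43.143  [4396] (calico-apiserver) acquires the lock
    00:16:43.167  [4402] (calico-kube-controllers) asks for it and waits
    00:16:43.249  [4396] releases it, having claimed 192.168.88.130
    00:16:43.250  [4402] acquires it after ~83 ms and claims 192.168.88.131
    00:16:43.354  [4402] releases it
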
Jul 11 00:16:43.391908 containerd[1445]: 2025-07-11 00:16:43.354 [INFO][4402] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" HandleID="k8s-pod-network.3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:16:43.393233 containerd[1445]: 2025-07-11 00:16:43.357 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Namespace="calico-system" Pod="calico-kube-controllers-5f5d7d7856-pzcbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0", GenerateName:"calico-kube-controllers-5f5d7d7856-", Namespace:"calico-system", SelfLink:"", UID:"a38583e9-7b43-4c77-8995-9dc39bf3123b", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5d7d7856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f5d7d7856-pzcbk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a8f6ea5fa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:43.393233 containerd[1445]: 2025-07-11 00:16:43.357 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Namespace="calico-system" Pod="calico-kube-controllers-5f5d7d7856-pzcbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:16:43.393233 containerd[1445]: 2025-07-11 00:16:43.357 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a8f6ea5fa5 ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Namespace="calico-system" Pod="calico-kube-controllers-5f5d7d7856-pzcbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:16:43.393233 containerd[1445]: 2025-07-11 00:16:43.360 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Namespace="calico-system" Pod="calico-kube-controllers-5f5d7d7856-pzcbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:16:43.393233 containerd[1445]: 2025-07-11 00:16:43.364 [INFO][4379] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Namespace="calico-system" Pod="calico-kube-controllers-5f5d7d7856-pzcbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0", GenerateName:"calico-kube-controllers-5f5d7d7856-", Namespace:"calico-system", SelfLink:"", UID:"a38583e9-7b43-4c77-8995-9dc39bf3123b", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5d7d7856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4", Pod:"calico-kube-controllers-5f5d7d7856-pzcbk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a8f6ea5fa5", MAC:"be:e8:b5:47:5d:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:43.393233 containerd[1445]: 2025-07-11 00:16:43.388 [INFO][4379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4" Namespace="calico-system" Pod="calico-kube-controllers-5f5d7d7856-pzcbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:16:43.412625 containerd[1445]: time="2025-07-11T00:16:43.412505579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:43.412625 containerd[1445]: time="2025-07-11T00:16:43.412577388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:43.412801 containerd[1445]: time="2025-07-11T00:16:43.412608112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:43.412801 containerd[1445]: time="2025-07-11T00:16:43.412704205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:43.432055 systemd[1]: Started cri-containerd-3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4.scope - libcontainer container 3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4. 
Jul 11 00:16:43.447076 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:16:43.476934 containerd[1445]: time="2025-07-11T00:16:43.476778715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5d7d7856-pzcbk,Uid:a38583e9-7b43-4c77-8995-9dc39bf3123b,Namespace:calico-system,Attempt:1,} returns sandbox id \"3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4\"" Jul 11 00:16:43.641414 systemd[1]: Started sshd@7-10.0.0.70:22-10.0.0.1:43166.service - OpenSSH per-connection server daemon (10.0.0.1:43166). Jul 11 00:16:43.728918 sshd[4512]: Accepted publickey for core from 10.0.0.1 port 43166 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:43.732495 sshd[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:43.744169 systemd-logind[1421]: New session 8 of user core. Jul 11 00:16:43.751460 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:16:43.851891 containerd[1445]: time="2025-07-11T00:16:43.851836700Z" level=info msg="StopPodSandbox for \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\"" Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.909 [INFO][4555] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.909 [INFO][4555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" iface="eth0" netns="/var/run/netns/cni-dbc12ee3-00be-f5a2-62f5-adaa12dc1e7d" Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.909 [INFO][4555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" iface="eth0" netns="/var/run/netns/cni-dbc12ee3-00be-f5a2-62f5-adaa12dc1e7d" Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.910 [INFO][4555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" iface="eth0" netns="/var/run/netns/cni-dbc12ee3-00be-f5a2-62f5-adaa12dc1e7d" Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.910 [INFO][4555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.910 [INFO][4555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.931 [INFO][4565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" HandleID="k8s-pod-network.8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.932 [INFO][4565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.932 [INFO][4565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.944 [WARNING][4565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" HandleID="k8s-pod-network.8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.945 [INFO][4565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" HandleID="k8s-pod-network.8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.946 [INFO][4565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:16:43.951473 containerd[1445]: 2025-07-11 00:16:43.948 [INFO][4555] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:16:43.952910 containerd[1445]: time="2025-07-11T00:16:43.952785445Z" level=info msg="TearDown network for sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\" successfully" Jul 11 00:16:43.952910 containerd[1445]: time="2025-07-11T00:16:43.952821089Z" level=info msg="StopPodSandbox for \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\" returns successfully" Jul 11 00:16:43.953170 kubelet[2469]: E0711 00:16:43.953142 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:43.954746 containerd[1445]: time="2025-07-11T00:16:43.953698161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mmnh8,Uid:52892df7-6e82-4fd1-8c85-d93129166596,Namespace:kube-system,Attempt:1,}" Jul 11 00:16:43.970897 systemd[1]: run-netns-cni\x2ddbc12ee3\x2d00be\x2df5a2\x2d62f5\x2dadaa12dc1e7d.mount: Deactivated successfully. Jul 11 00:16:44.045240 sshd[4512]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:44.049728 systemd[1]: sshd@7-10.0.0.70:22-10.0.0.1:43166.service: Deactivated successfully. Jul 11 00:16:44.051934 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:16:44.053883 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:16:44.054782 systemd-logind[1421]: Removed session 8. 
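
Two asides in the records above. The kubelet's "Nameserver limits exceeded" error is expected behavior, not a fault: the kubelet caps a pod's resolv.conf at three nameservers, so with more than three configured on the host it applies the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and omits the rest. And the SSH session interleaved with the CNI work (accepted publickey for core from 10.0.0.1, session 8 opened at 00:16:43.73 and closed at 00:16:44.045) lasted about 300 ms, which suggests an automated connection rather than an interactive login. The coredns teardown in between repeats the idempotent "Asked to release address ... Ignoring" pattern discussed earlier.
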
Jul 11 00:16:44.136472 systemd-networkd[1376]: calib7160a22bbf: Link UP Jul 11 00:16:44.136668 systemd-networkd[1376]: calib7160a22bbf: Gained carrier Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.035 [INFO][4573] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.056 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0 coredns-674b8bbfcf- kube-system 52892df7-6e82-4fd1-8c85-d93129166596 1002 0 2025-07-11 00:16:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mmnh8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib7160a22bbf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-mmnh8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mmnh8-" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.056 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-mmnh8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.077 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" HandleID="k8s-pod-network.f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.078 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" HandleID="k8s-pod-network.f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004deb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mmnh8", "timestamp":"2025-07-11 00:16:44.077933512 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.078 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.078 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.078 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.092 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" host="localhost" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.098 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.105 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.106 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.108 [INFO][4589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.108 [INFO][4589] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" host="localhost" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.110 [INFO][4589] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6 Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.116 [INFO][4589] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" host="localhost" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.131 [INFO][4589] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" host="localhost" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.131 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" host="localhost" Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.131 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
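The ipam/ipam.go sequence just logged is Calico's block-affinity allocation in miniature: under the host-wide IPAM lock the plugin looks up the host's affine block (192.168.88.128/26), loads it, claims the first free address, and writes the block back to the datastore, yielding 192.168.88.132 for the coredns pod. A toy model of that claim step, assuming a simple allocated-set rather than Calico's real block structure in libcalico-go:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a toy model of a Calico IPAM block: a /26 affined to one
// host plus the set of addresses already claimed from it.
type block struct {
	cidr      netip.Prefix        // e.g. 192.168.88.128/26
	allocated map[netip.Addr]bool // claimed addresses
}

// assign walks the block for the first free address and marks it
// claimed -- the "Attempting to assign 1 addresses from block" /
// "Writing block in order to claim IPs" portion of the trace.
func (b *block) assign() (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if !b.allocated[a] {
			b.allocated[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]bool{},
	}
	// Pretend .128-.131 were claimed earlier; the next claim then
	// yields 192.168.88.132, matching the coredns pod above.
	for i := 0; i < 4; i++ {
		b.assign()
	}
	ip, _ := b.assign()
	fmt.Println("claimed:", ip) // claimed: 192.168.88.132
}
```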
Jul 11 00:16:44.164605 containerd[1445]: 2025-07-11 00:16:44.131 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" HandleID="k8s-pod-network.f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:16:44.165197 containerd[1445]: 2025-07-11 00:16:44.134 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-mmnh8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"52892df7-6e82-4fd1-8c85-d93129166596", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mmnh8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7160a22bbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:44.165197 containerd[1445]: 2025-07-11 00:16:44.134 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-mmnh8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:16:44.165197 containerd[1445]: 2025-07-11 00:16:44.134 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7160a22bbf ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-mmnh8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:16:44.165197 containerd[1445]: 2025-07-11 00:16:44.136 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-mmnh8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:16:44.165197 
containerd[1445]: 2025-07-11 00:16:44.137 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-mmnh8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"52892df7-6e82-4fd1-8c85-d93129166596", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6", Pod:"coredns-674b8bbfcf-mmnh8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7160a22bbf", MAC:"f6:89:5b:29:22:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:44.165197 containerd[1445]: 2025-07-11 00:16:44.160 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-mmnh8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:16:44.204160 containerd[1445]: time="2025-07-11T00:16:44.203970773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:44.204160 containerd[1445]: time="2025-07-11T00:16:44.204036341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:44.204160 containerd[1445]: time="2025-07-11T00:16:44.204050743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:44.204320 containerd[1445]: time="2025-07-11T00:16:44.204154156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:44.228031 systemd[1]: Started cri-containerd-f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6.scope - libcontainer container f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6. Jul 11 00:16:44.240037 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:16:44.257273 containerd[1445]: time="2025-07-11T00:16:44.257226767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mmnh8,Uid:52892df7-6e82-4fd1-8c85-d93129166596,Namespace:kube-system,Attempt:1,} returns sandbox id \"f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6\"" Jul 11 00:16:44.258193 kubelet[2469]: E0711 00:16:44.258168 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:44.291905 containerd[1445]: time="2025-07-11T00:16:44.291845960Z" level=info msg="CreateContainer within sandbox \"f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:16:44.478350 containerd[1445]: time="2025-07-11T00:16:44.478227978Z" level=info msg="CreateContainer within sandbox \"f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2b2a75bf92dd51a7e1399b4cc7d0824e290e220502c475fcd95915cfb2547c6\"" Jul 11 00:16:44.480685 containerd[1445]: time="2025-07-11T00:16:44.479783772Z" level=info msg="StartContainer for \"f2b2a75bf92dd51a7e1399b4cc7d0824e290e220502c475fcd95915cfb2547c6\"" Jul 11 00:16:44.515396 systemd[1]: Started cri-containerd-f2b2a75bf92dd51a7e1399b4cc7d0824e290e220502c475fcd95915cfb2547c6.scope - libcontainer container f2b2a75bf92dd51a7e1399b4cc7d0824e290e220502c475fcd95915cfb2547c6. 
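The containerd lines around here trace the CRI call order the kubelet drives for every pod: RunPodSandbox (which triggers the Calico ADD and the IPAM claim above), then CreateContainer inside the returned sandbox, then StartContainer, with each container landing in its own transient cri-containerd-<id>.scope unit. A compile-and-run sketch of that ordering against a toy interface (the real, much larger interface lives in k8s.io/cri-api; criRuntime and fakeRuntime here are invented for illustration):

```go
package main

import "fmt"

// criRuntime names just the three CRI verbs visible in this trace.
type criRuntime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod replays the sequence logged for coredns-674b8bbfcf-mmnh8:
// sandbox first, then the workload container inside it.
func startPod(rt criRuntime, pod, container string) error {
	sb, err := rt.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox %s: %w", pod, err)
	}
	ctr, err := rt.CreateContainer(sb, container)
	if err != nil {
		return fmt.Errorf("CreateContainer in %s: %w", sb, err)
	}
	return rt.StartContainer(ctr)
}

// fakeRuntime just echoes the calls so the ordering is visible.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	id := fmt.Sprintf("sandbox-%d", f.n)
	fmt.Println("RunPodSandbox", pod, "->", id)
	return id, nil
}

func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	id := fmt.Sprintf("container-%d", f.n)
	fmt.Println("CreateContainer", name, "in", sb, "->", id)
	return id, nil
}

func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("StartContainer", id, "returns successfully")
	return nil
}

func main() {
	_ = startPod(&fakeRuntime{}, "coredns-674b8bbfcf-mmnh8", "coredns")
}
```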
Jul 11 00:16:44.545995 containerd[1445]: time="2025-07-11T00:16:44.545951415Z" level=info msg="StartContainer for \"f2b2a75bf92dd51a7e1399b4cc7d0824e290e220502c475fcd95915cfb2547c6\" returns successfully" Jul 11 00:16:45.060712 kubelet[2469]: E0711 00:16:45.060672 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:45.091170 kubelet[2469]: I0711 00:16:45.091088 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mmnh8" podStartSLOduration=38.091071968 podStartE2EDuration="38.091071968s" podCreationTimestamp="2025-07-11 00:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:45.075758107 +0000 UTC m=+44.334043130" watchObservedRunningTime="2025-07-11 00:16:45.091071968 +0000 UTC m=+44.349356991" Jul 11 00:16:45.263986 systemd-networkd[1376]: calib7160a22bbf: Gained IPv6LL Jul 11 00:16:45.328214 systemd-networkd[1376]: cali517df191baf: Gained IPv6LL Jul 11 00:16:45.391045 systemd-networkd[1376]: cali8a8f6ea5fa5: Gained IPv6LL Jul 11 00:16:45.665917 containerd[1445]: time="2025-07-11T00:16:45.665764125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:45.666524 containerd[1445]: time="2025-07-11T00:16:45.666493974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 11 00:16:45.668891 containerd[1445]: time="2025-07-11T00:16:45.667803173Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:45.672339 containerd[1445]: time="2025-07-11T00:16:45.672289278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:45.673317 containerd[1445]: time="2025-07-11T00:16:45.673276838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.290836103s" Jul 11 00:16:45.673443 containerd[1445]: time="2025-07-11T00:16:45.673424976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 11 00:16:45.674656 containerd[1445]: time="2025-07-11T00:16:45.674627842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:16:45.677933 containerd[1445]: time="2025-07-11T00:16:45.677896400Z" level=info msg="CreateContainer within sandbox \"d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:16:45.694871 containerd[1445]: time="2025-07-11T00:16:45.694811295Z" level=info msg="CreateContainer within sandbox \"d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cecbc5842b0b543576a1fef75caa766ab05049d8123725eea2bbc9c3ac78ddf5\"" Jul 11 00:16:45.695381 containerd[1445]: time="2025-07-11T00:16:45.695345320Z" level=info msg="StartContainer for \"cecbc5842b0b543576a1fef75caa766ab05049d8123725eea2bbc9c3ac78ddf5\"" Jul 11 00:16:45.746124 systemd[1]: Started cri-containerd-cecbc5842b0b543576a1fef75caa766ab05049d8123725eea2bbc9c3ac78ddf5.scope - libcontainer container cecbc5842b0b543576a1fef75caa766ab05049d8123725eea2bbc9c3ac78ddf5. Jul 11 00:16:45.817862 containerd[1445]: time="2025-07-11T00:16:45.817806122Z" level=info msg="StartContainer for \"cecbc5842b0b543576a1fef75caa766ab05049d8123725eea2bbc9c3ac78ddf5\" returns successfully" Jul 11 00:16:45.840981 containerd[1445]: time="2025-07-11T00:16:45.840030942Z" level=info msg="StopPodSandbox for \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\"" Jul 11 00:16:45.840981 containerd[1445]: time="2025-07-11T00:16:45.840268851Z" level=info msg="StopPodSandbox for \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\"" Jul 11 00:16:45.840981 containerd[1445]: time="2025-07-11T00:16:45.840406028Z" level=info msg="StopPodSandbox for \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\"" Jul 11 00:16:45.841644 containerd[1445]: time="2025-07-11T00:16:45.841402629Z" level=info msg="StopPodSandbox for \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\"" Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:45.942 [INFO][4797] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:45.942 [INFO][4797] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" iface="eth0" netns="/var/run/netns/cni-936fb542-08ec-ca7a-e6f9-a989b2bd2261" Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:45.942 [INFO][4797] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" iface="eth0" netns="/var/run/netns/cni-936fb542-08ec-ca7a-e6f9-a989b2bd2261" Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:45.943 [INFO][4797] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" iface="eth0" netns="/var/run/netns/cni-936fb542-08ec-ca7a-e6f9-a989b2bd2261" Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:45.943 [INFO][4797] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:45.943 [INFO][4797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:45.994 [INFO][4836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" HandleID="k8s-pod-network.224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:45.994 [INFO][4836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:45.994 [INFO][4836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:46.014 [WARNING][4836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" HandleID="k8s-pod-network.224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:46.014 [INFO][4836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" HandleID="k8s-pod-network.224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:46.017 [INFO][4836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:16:46.047834 containerd[1445]: 2025-07-11 00:16:46.030 [INFO][4797] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:16:46.052005 containerd[1445]: time="2025-07-11T00:16:46.050373002Z" level=info msg="TearDown network for sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\" successfully" Jul 11 00:16:46.052005 containerd[1445]: time="2025-07-11T00:16:46.051958990Z" level=info msg="StopPodSandbox for \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\" returns successfully" Jul 11 00:16:46.051967 systemd[1]: run-netns-cni\x2d936fb542\x2d08ec\x2dca7a\x2de6f9\x2da989b2bd2261.mount: Deactivated successfully. Jul 11 00:16:46.054146 containerd[1445]: time="2025-07-11T00:16:46.054105124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9544685f-msjqm,Uid:e396f44e-1f83-4d6d-a81a-6662c419d2df,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:16:46.071207 kubelet[2469]: E0711 00:16:46.070765 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:45.986 [INFO][4802] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:45.987 [INFO][4802] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" iface="eth0" netns="/var/run/netns/cni-9bd4f511-72d7-87ee-e0d5-d13e77210fc9" Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:45.987 [INFO][4802] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" iface="eth0" netns="/var/run/netns/cni-9bd4f511-72d7-87ee-e0d5-d13e77210fc9" Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:45.988 [INFO][4802] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" iface="eth0" netns="/var/run/netns/cni-9bd4f511-72d7-87ee-e0d5-d13e77210fc9" Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:45.988 [INFO][4802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:45.988 [INFO][4802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:46.053 [INFO][4858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:46.054 [INFO][4858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:46.054 [INFO][4858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:46.066 [WARNING][4858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:46.066 [INFO][4858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:46.077 [INFO][4858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:16:46.091868 containerd[1445]: 2025-07-11 00:16:46.081 [INFO][4802] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:16:46.093524 containerd[1445]: time="2025-07-11T00:16:46.092160200Z" level=info msg="TearDown network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\" successfully" Jul 11 00:16:46.093524 containerd[1445]: time="2025-07-11T00:16:46.092194924Z" level=info msg="StopPodSandbox for \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\" returns successfully" Jul 11 00:16:46.093524 containerd[1445]: time="2025-07-11T00:16:46.093102072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9544685f-czxjh,Uid:f6ab768e-fd1a-4783-bee2-e85ef4dae0dc,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:16:46.095597 systemd[1]: run-netns-cni\x2d9bd4f511\x2d72d7\x2d87ee\x2de0d5\x2dd13e77210fc9.mount: Deactivated successfully. 
Jul 11 00:16:46.114921 kubelet[2469]: I0711 00:16:46.114790 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77dc4685dc-gc67k" podStartSLOduration=25.822614219 podStartE2EDuration="28.114649709s" podCreationTimestamp="2025-07-11 00:16:18 +0000 UTC" firstStartedPulling="2025-07-11 00:16:43.382105693 +0000 UTC m=+42.640390676" lastFinishedPulling="2025-07-11 00:16:45.674141143 +0000 UTC m=+44.932426166" observedRunningTime="2025-07-11 00:16:46.112028398 +0000 UTC m=+45.370313421" watchObservedRunningTime="2025-07-11 00:16:46.114649709 +0000 UTC m=+45.372934732" Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:45.973 [INFO][4806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:45.973 [INFO][4806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" iface="eth0" netns="/var/run/netns/cni-b7421a98-38d5-c8c6-62b5-000c40c7333f" Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:45.973 [INFO][4806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" iface="eth0" netns="/var/run/netns/cni-b7421a98-38d5-c8c6-62b5-000c40c7333f" Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:45.974 [INFO][4806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" iface="eth0" netns="/var/run/netns/cni-b7421a98-38d5-c8c6-62b5-000c40c7333f" Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:45.974 [INFO][4806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:45.974 [INFO][4806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:46.066 [INFO][4851] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" HandleID="k8s-pod-network.a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:46.067 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:46.090 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:46.122 [WARNING][4851] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" HandleID="k8s-pod-network.a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:46.122 [INFO][4851] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" HandleID="k8s-pod-network.a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:46.126 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:16:46.141115 containerd[1445]: 2025-07-11 00:16:46.130 [INFO][4806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:16:46.142156 containerd[1445]: time="2025-07-11T00:16:46.142116648Z" level=info msg="TearDown network for sandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\" successfully" Jul 11 00:16:46.142156 containerd[1445]: time="2025-07-11T00:16:46.142153332Z" level=info msg="StopPodSandbox for \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\" returns successfully" Jul 11 00:16:46.145691 containerd[1445]: time="2025-07-11T00:16:46.145578979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v292t,Uid:9fab7f82-393a-41e4-a999-9430044f6a22,Namespace:calico-system,Attempt:1,}" Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.006 [INFO][4809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.007 [INFO][4809] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" iface="eth0" netns="/var/run/netns/cni-c727fbbd-3c15-e931-9a3f-a3cb3c7348c2" Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.007 [INFO][4809] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" iface="eth0" netns="/var/run/netns/cni-c727fbbd-3c15-e931-9a3f-a3cb3c7348c2" Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.008 [INFO][4809] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" iface="eth0" netns="/var/run/netns/cni-c727fbbd-3c15-e931-9a3f-a3cb3c7348c2" Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.008 [INFO][4809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.008 [INFO][4809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.076 [INFO][4866] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" HandleID="k8s-pod-network.7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.076 [INFO][4866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.128 [INFO][4866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.145 [WARNING][4866] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" HandleID="k8s-pod-network.7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.145 [INFO][4866] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" HandleID="k8s-pod-network.7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.148 [INFO][4866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:16:46.162949 containerd[1445]: 2025-07-11 00:16:46.154 [INFO][4809] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:16:46.163959 containerd[1445]: time="2025-07-11T00:16:46.163854507Z" level=info msg="TearDown network for sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\" successfully" Jul 11 00:16:46.164392 containerd[1445]: time="2025-07-11T00:16:46.164168784Z" level=info msg="StopPodSandbox for \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\" returns successfully" Jul 11 00:16:46.165247 kubelet[2469]: E0711 00:16:46.165184 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:46.166195 containerd[1445]: time="2025-07-11T00:16:46.165829901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fxqzj,Uid:67c0bfe2-c5c5-48ac-a593-483d9d147ed4,Namespace:kube-system,Attempt:1,}" Jul 11 00:16:46.311533 systemd-networkd[1376]: cali2aed5f2c366: Link UP Jul 11 00:16:46.311680 systemd-networkd[1376]: cali2aed5f2c366: Gained carrier Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.130 [INFO][4885] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.153 [INFO][4885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0 calico-apiserver-6f9544685f- calico-apiserver e396f44e-1f83-4d6d-a81a-6662c419d2df 1035 0 2025-07-11 00:16:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f9544685f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f9544685f-msjqm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2aed5f2c366 [] [] }} ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-msjqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--msjqm-" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.154 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-msjqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.229 [INFO][4930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" HandleID="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.229 [INFO][4930] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" HandleID="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502ad0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-6f9544685f-msjqm", "timestamp":"2025-07-11 00:16:46.228633394 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.229 [INFO][4930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.229 [INFO][4930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.229 [INFO][4930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.254 [INFO][4930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" host="localhost" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.263 [INFO][4930] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.280 [INFO][4930] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.284 [INFO][4930] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.287 [INFO][4930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.287 [INFO][4930] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" host="localhost" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.289 [INFO][4930] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.296 [INFO][4930] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" host="localhost" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.303 [INFO][4930] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" host="localhost" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.303 [INFO][4930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" host="localhost" Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.303 [INFO][4930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:16:46.337783 containerd[1445]: 2025-07-11 00:16:46.303 [INFO][4930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" HandleID="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:16:46.338668 containerd[1445]: 2025-07-11 00:16:46.307 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-msjqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0", GenerateName:"calico-apiserver-6f9544685f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e396f44e-1f83-4d6d-a81a-6662c419d2df", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9544685f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f9544685f-msjqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2aed5f2c366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:46.338668 containerd[1445]: 2025-07-11 00:16:46.307 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-msjqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:16:46.338668 containerd[1445]: 2025-07-11 00:16:46.308 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2aed5f2c366 ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-msjqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:16:46.338668 containerd[1445]: 2025-07-11 00:16:46.315 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-msjqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:16:46.338668 containerd[1445]: 2025-07-11 00:16:46.318 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-msjqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0", GenerateName:"calico-apiserver-6f9544685f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e396f44e-1f83-4d6d-a81a-6662c419d2df", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9544685f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a", Pod:"calico-apiserver-6f9544685f-msjqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2aed5f2c366", MAC:"82:6a:6b:db:0c:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:46.338668 containerd[1445]: 2025-07-11 00:16:46.333 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-msjqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:16:46.356975 containerd[1445]: time="2025-07-11T00:16:46.356357309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:46.357315 containerd[1445]: time="2025-07-11T00:16:46.357260896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:46.357315 containerd[1445]: time="2025-07-11T00:16:46.357284779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:46.357441 containerd[1445]: time="2025-07-11T00:16:46.357390752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:46.381169 systemd[1]: Started cri-containerd-7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a.scope - libcontainer container 7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a. 
Jul 11 00:16:46.397694 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:16:46.418444 containerd[1445]: time="2025-07-11T00:16:46.418094475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9544685f-msjqm,Uid:e396f44e-1f83-4d6d-a81a-6662c419d2df,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a\"" Jul 11 00:16:46.426196 containerd[1445]: time="2025-07-11T00:16:46.425834473Z" level=info msg="CreateContainer within sandbox \"7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:16:46.434260 systemd-networkd[1376]: cali08346c5afa5: Link UP Jul 11 00:16:46.434575 systemd-networkd[1376]: cali08346c5afa5: Gained carrier Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.171 [INFO][4900] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.199 [INFO][4900] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0 calico-apiserver-6f9544685f- calico-apiserver f6ab768e-fd1a-4783-bee2-e85ef4dae0dc 1037 0 2025-07-11 00:16:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f9544685f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f9544685f-czxjh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali08346c5afa5 [] [] }} ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-czxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--czxjh-" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.199 [INFO][4900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-czxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.258 [INFO][4960] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" HandleID="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.259 [INFO][4960] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" HandleID="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f9544685f-czxjh", "timestamp":"2025-07-11 00:16:46.258545343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.261 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.307 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.307 [INFO][4960] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.352 [INFO][4960] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" host="localhost" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.383 [INFO][4960] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.390 [INFO][4960] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.392 [INFO][4960] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.396 [INFO][4960] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.396 [INFO][4960] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" host="localhost" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.398 [INFO][4960] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602 Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.417 [INFO][4960] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" host="localhost" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.427 [INFO][4960] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" host="localhost" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.427 [INFO][4960] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" host="localhost" Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.428 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:16:46.452636 containerd[1445]: 2025-07-11 00:16:46.428 [INFO][4960] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" HandleID="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:16:46.453364 containerd[1445]: 2025-07-11 00:16:46.430 [INFO][4900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-czxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0", GenerateName:"calico-apiserver-6f9544685f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9544685f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f9544685f-czxjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08346c5afa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:46.453364 containerd[1445]: 2025-07-11 00:16:46.431 [INFO][4900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-czxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:16:46.453364 containerd[1445]: 2025-07-11 00:16:46.431 [INFO][4900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08346c5afa5 ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-czxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:16:46.453364 containerd[1445]: 2025-07-11 00:16:46.436 [INFO][4900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-czxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:16:46.453364 containerd[1445]: 2025-07-11 00:16:46.436 [INFO][4900] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-czxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0", GenerateName:"calico-apiserver-6f9544685f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9544685f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602", Pod:"calico-apiserver-6f9544685f-czxjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08346c5afa5", MAC:"52:22:e1:0d:3b:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:46.453364 containerd[1445]: 2025-07-11 00:16:46.448 [INFO][4900] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Namespace="calico-apiserver" Pod="calico-apiserver-6f9544685f-czxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:16:46.455647 containerd[1445]: time="2025-07-11T00:16:46.455602325Z" level=info msg="CreateContainer within sandbox \"7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\"" Jul 11 00:16:46.457658 containerd[1445]: time="2025-07-11T00:16:46.456953405Z" level=info msg="StartContainer for \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\"" Jul 11 00:16:46.487972 containerd[1445]: time="2025-07-11T00:16:46.487465186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:46.487972 containerd[1445]: time="2025-07-11T00:16:46.487539395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:46.487972 containerd[1445]: time="2025-07-11T00:16:46.487597482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:46.487972 containerd[1445]: time="2025-07-11T00:16:46.487788304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:46.490074 systemd[1]: Started cri-containerd-419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807.scope - libcontainer container 419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807. Jul 11 00:16:46.530956 systemd-networkd[1376]: caliab2ece37036: Link UP Jul 11 00:16:46.531194 systemd[1]: Started cri-containerd-dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602.scope - libcontainer container dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602. Jul 11 00:16:46.531298 systemd-networkd[1376]: caliab2ece37036: Gained carrier Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.210 [INFO][4921] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.241 [INFO][4921] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--v292t-eth0 csi-node-driver- calico-system 9fab7f82-393a-41e4-a999-9430044f6a22 1036 0 2025-07-11 00:16:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-v292t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliab2ece37036 [] [] }} ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Namespace="calico-system" Pod="csi-node-driver-v292t" WorkloadEndpoint="localhost-k8s-csi--node--driver--v292t-" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.241 [INFO][4921] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Namespace="calico-system" Pod="csi-node-driver-v292t" WorkloadEndpoint="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.280 [INFO][4972] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" HandleID="k8s-pod-network.b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.280 [INFO][4972] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" HandleID="k8s-pod-network.b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Workload="localhost-k8s-csi--node--driver--v292t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-v292t", "timestamp":"2025-07-11 00:16:46.280129544 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.280 [INFO][4972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.428 [INFO][4972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.428 [INFO][4972] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.451 [INFO][4972] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" host="localhost" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.484 [INFO][4972] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.493 [INFO][4972] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.496 [INFO][4972] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.499 [INFO][4972] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.499 [INFO][4972] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" host="localhost" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.501 [INFO][4972] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18 Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.505 [INFO][4972] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" host="localhost" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.514 [INFO][4972] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" host="localhost" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.514 [INFO][4972] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" host="localhost" Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.514 [INFO][4972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:16:46.546295 containerd[1445]: 2025-07-11 00:16:46.514 [INFO][4972] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" HandleID="k8s-pod-network.b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:16:46.547428 containerd[1445]: 2025-07-11 00:16:46.528 [INFO][4921] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Namespace="calico-system" Pod="csi-node-driver-v292t" WorkloadEndpoint="localhost-k8s-csi--node--driver--v292t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v292t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fab7f82-393a-41e4-a999-9430044f6a22", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-v292t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab2ece37036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:46.547428 containerd[1445]: 2025-07-11 00:16:46.528 [INFO][4921] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Namespace="calico-system" Pod="csi-node-driver-v292t" WorkloadEndpoint="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:16:46.547428 containerd[1445]: 2025-07-11 00:16:46.528 [INFO][4921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab2ece37036 ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Namespace="calico-system" Pod="csi-node-driver-v292t" WorkloadEndpoint="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:16:46.547428 containerd[1445]: 2025-07-11 00:16:46.531 [INFO][4921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Namespace="calico-system" Pod="csi-node-driver-v292t" WorkloadEndpoint="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:16:46.547428 containerd[1445]: 2025-07-11 00:16:46.532 [INFO][4921] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Namespace="calico-system" Pod="csi-node-driver-v292t" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--v292t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v292t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fab7f82-393a-41e4-a999-9430044f6a22", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18", Pod:"csi-node-driver-v292t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab2ece37036", MAC:"aa:89:e2:82:64:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:46.547428 containerd[1445]: 2025-07-11 00:16:46.543 [INFO][4921] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18" Namespace="calico-system" Pod="csi-node-driver-v292t" WorkloadEndpoint="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:16:46.565471 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:16:46.582787 containerd[1445]: time="2025-07-11T00:16:46.582511544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:46.582787 containerd[1445]: time="2025-07-11T00:16:46.582647880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:46.582787 containerd[1445]: time="2025-07-11T00:16:46.582674843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:46.584765 containerd[1445]: time="2025-07-11T00:16:46.583205226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:46.589178 containerd[1445]: time="2025-07-11T00:16:46.589140971Z" level=info msg="StartContainer for \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\" returns successfully" Jul 11 00:16:46.595331 containerd[1445]: time="2025-07-11T00:16:46.595291220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9544685f-czxjh,Uid:f6ab768e-fd1a-4783-bee2-e85ef4dae0dc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602\"" Jul 11 00:16:46.601143 containerd[1445]: time="2025-07-11T00:16:46.600598530Z" level=info msg="CreateContainer within sandbox \"dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:16:46.623231 systemd[1]: Started cri-containerd-b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18.scope - libcontainer container b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18. Jul 11 00:16:46.623754 containerd[1445]: time="2025-07-11T00:16:46.623711033Z" level=info msg="CreateContainer within sandbox \"dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4\"" Jul 11 00:16:46.628092 containerd[1445]: time="2025-07-11T00:16:46.627667302Z" level=info msg="StartContainer for \"ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4\"" Jul 11 00:16:46.641645 systemd-networkd[1376]: cali66672ab4cd0: Link UP Jul 11 00:16:46.644335 systemd-networkd[1376]: cali66672ab4cd0: Gained carrier Jul 11 00:16:46.660574 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.237 [INFO][4947] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.263 [INFO][4947] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0 coredns-674b8bbfcf- kube-system 67c0bfe2-c5c5-48ac-a593-483d9d147ed4 1038 0 2025-07-11 00:16:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fxqzj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali66672ab4cd0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxqzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxqzj-" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.263 [INFO][4947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxqzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.308 [INFO][4981] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" 
HandleID="k8s-pod-network.064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.309 [INFO][4981] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" HandleID="k8s-pod-network.064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fxqzj", "timestamp":"2025-07-11 00:16:46.308929281 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.309 [INFO][4981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.516 [INFO][4981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.517 [INFO][4981] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.552 [INFO][4981] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" host="localhost" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.584 [INFO][4981] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.597 [INFO][4981] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.602 [INFO][4981] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.607 [INFO][4981] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.607 [INFO][4981] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" host="localhost" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.610 [INFO][4981] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.617 [INFO][4981] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" host="localhost" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.627 [INFO][4981] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" host="localhost" Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.627 [INFO][4981] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" host="localhost" Jul 11 
00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.627 [INFO][4981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:16:46.667083 containerd[1445]: 2025-07-11 00:16:46.627 [INFO][4981] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" HandleID="k8s-pod-network.064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:16:46.667832 containerd[1445]: 2025-07-11 00:16:46.637 [INFO][4947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxqzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67c0bfe2-c5c5-48ac-a593-483d9d147ed4", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fxqzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66672ab4cd0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:46.667832 containerd[1445]: 2025-07-11 00:16:46.637 [INFO][4947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxqzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:16:46.667832 containerd[1445]: 2025-07-11 00:16:46.637 [INFO][4947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66672ab4cd0 ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxqzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:16:46.667832 containerd[1445]: 2025-07-11 00:16:46.646 [INFO][4947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-fxqzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:16:46.667832 containerd[1445]: 2025-07-11 00:16:46.648 [INFO][4947] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxqzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67c0bfe2-c5c5-48ac-a593-483d9d147ed4", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be", Pod:"coredns-674b8bbfcf-fxqzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66672ab4cd0", MAC:"aa:9c:5a:4f:e9:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:46.667832 containerd[1445]: 2025-07-11 00:16:46.658 [INFO][4947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxqzj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:16:46.672076 systemd[1]: Started cri-containerd-ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4.scope - libcontainer container ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4. Jul 11 00:16:46.697557 containerd[1445]: time="2025-07-11T00:16:46.697517870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v292t,Uid:9fab7f82-393a-41e4-a999-9430044f6a22,Namespace:calico-system,Attempt:1,} returns sandbox id \"b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18\"" Jul 11 00:16:46.705017 containerd[1445]: time="2025-07-11T00:16:46.703996239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:46.708948 containerd[1445]: time="2025-07-11T00:16:46.705003799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:46.708948 containerd[1445]: time="2025-07-11T00:16:46.705632753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:46.708948 containerd[1445]: time="2025-07-11T00:16:46.705733165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:46.735486 containerd[1445]: time="2025-07-11T00:16:46.735445131Z" level=info msg="StartContainer for \"ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4\" returns successfully" Jul 11 00:16:46.741101 systemd[1]: Started cri-containerd-064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be.scope - libcontainer container 064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be. Jul 11 00:16:46.753581 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:16:46.774167 containerd[1445]: time="2025-07-11T00:16:46.774124320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fxqzj,Uid:67c0bfe2-c5c5-48ac-a593-483d9d147ed4,Namespace:kube-system,Attempt:1,} returns sandbox id \"064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be\"" Jul 11 00:16:46.775476 kubelet[2469]: E0711 00:16:46.775450 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:46.781177 containerd[1445]: time="2025-07-11T00:16:46.781136992Z" level=info msg="CreateContainer within sandbox \"064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:16:46.794678 containerd[1445]: time="2025-07-11T00:16:46.794616272Z" level=info msg="CreateContainer within sandbox \"064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30a9450e804716b3351edcfc509ea8756fdbb375e44d6079c5086bd8a6bbd3fc\"" Jul 11 00:16:46.795192 containerd[1445]: time="2025-07-11T00:16:46.795084967Z" level=info msg="StartContainer for \"30a9450e804716b3351edcfc509ea8756fdbb375e44d6079c5086bd8a6bbd3fc\"" Jul 11 00:16:46.826099 systemd[1]: Started cri-containerd-30a9450e804716b3351edcfc509ea8756fdbb375e44d6079c5086bd8a6bbd3fc.scope - libcontainer container 30a9450e804716b3351edcfc509ea8756fdbb375e44d6079c5086bd8a6bbd3fc. Jul 11 00:16:46.839222 containerd[1445]: time="2025-07-11T00:16:46.839164598Z" level=info msg="StopPodSandbox for \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\"" Jul 11 00:16:46.862412 containerd[1445]: time="2025-07-11T00:16:46.862370871Z" level=info msg="StartContainer for \"30a9450e804716b3351edcfc509ea8756fdbb375e44d6079c5086bd8a6bbd3fc\" returns successfully" Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.917 [INFO][5303] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.918 [INFO][5303] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" iface="eth0" netns="/var/run/netns/cni-810c4ac9-33e9-edd3-b023-731f9a01b839" Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.918 [INFO][5303] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" iface="eth0" netns="/var/run/netns/cni-810c4ac9-33e9-edd3-b023-731f9a01b839" Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.918 [INFO][5303] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" iface="eth0" netns="/var/run/netns/cni-810c4ac9-33e9-edd3-b023-731f9a01b839" Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.919 [INFO][5303] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.919 [INFO][5303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.953 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" HandleID="k8s-pod-network.8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.953 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.953 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.967 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" HandleID="k8s-pod-network.8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.967 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" HandleID="k8s-pod-network.8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.970 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:16:46.984758 containerd[1445]: 2025-07-11 00:16:46.980 [INFO][5303] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:16:46.985253 containerd[1445]: time="2025-07-11T00:16:46.984917212Z" level=info msg="TearDown network for sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\" successfully" Jul 11 00:16:46.985253 containerd[1445]: time="2025-07-11T00:16:46.984943815Z" level=info msg="StopPodSandbox for \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\" returns successfully" Jul 11 00:16:46.986005 containerd[1445]: time="2025-07-11T00:16:46.985952215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lf8r5,Uid:58c7686e-2053-4b0c-9e02-052b8ed1eb7b,Namespace:calico-system,Attempt:1,}" Jul 11 00:16:47.068400 systemd[1]: run-netns-cni\x2dc727fbbd\x2d3c15\x2de931\x2d9a3f\x2da3cb3c7348c2.mount: Deactivated successfully. Jul 11 00:16:47.068508 systemd[1]: run-netns-cni\x2d810c4ac9\x2d33e9\x2dedd3\x2db023\x2d731f9a01b839.mount: Deactivated successfully. Jul 11 00:16:47.068568 systemd[1]: run-netns-cni\x2db7421a98\x2d38d5\x2dc8c6\x2d62b5\x2d000c40c7333f.mount: Deactivated successfully. Jul 11 00:16:47.087515 kubelet[2469]: E0711 00:16:47.086322 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:47.101862 kubelet[2469]: E0711 00:16:47.101533 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:47.103304 kubelet[2469]: I0711 00:16:47.103260 2469 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:16:47.106057 kubelet[2469]: I0711 00:16:47.106000 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f9544685f-czxjh" podStartSLOduration=30.105985256 podStartE2EDuration="30.105985256s" podCreationTimestamp="2025-07-11 00:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:47.105889685 +0000 UTC m=+46.364174708" watchObservedRunningTime="2025-07-11 00:16:47.105985256 +0000 UTC m=+46.364270279" Jul 11 00:16:47.124963 kubelet[2469]: I0711 00:16:47.124908 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f9544685f-msjqm" podStartSLOduration=30.124891448 podStartE2EDuration="30.124891448s" podCreationTimestamp="2025-07-11 00:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:47.12473179 +0000 UTC m=+46.383016813" watchObservedRunningTime="2025-07-11 00:16:47.124891448 +0000 UTC m=+46.383176471" Jul 11 00:16:47.195156 systemd-networkd[1376]: cali4f234a36eb1: Link UP Jul 11 00:16:47.197317 systemd-networkd[1376]: cali4f234a36eb1: Gained carrier Jul 11 00:16:47.212461 kubelet[2469]: I0711 00:16:47.212156 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fxqzj" podStartSLOduration=40.212138127 podStartE2EDuration="40.212138127s" podCreationTimestamp="2025-07-11 00:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:47.143642863 +0000 UTC m=+46.401927886" 
watchObservedRunningTime="2025-07-11 00:16:47.212138127 +0000 UTC m=+46.470423150" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.058 [INFO][5333] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.082 [INFO][5333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0 goldmane-768f4c5c69- calico-system 58c7686e-2053-4b0c-9e02-052b8ed1eb7b 1074 0 2025-07-11 00:16:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-lf8r5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4f234a36eb1 [] [] }} ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Namespace="calico-system" Pod="goldmane-768f4c5c69-lf8r5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lf8r5-" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.082 [INFO][5333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Namespace="calico-system" Pod="goldmane-768f4c5c69-lf8r5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.133 [INFO][5346] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" HandleID="k8s-pod-network.7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.133 [INFO][5346] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" HandleID="k8s-pod-network.7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-lf8r5", "timestamp":"2025-07-11 00:16:47.133107161 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.133 [INFO][5346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.133 [INFO][5346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.133 [INFO][5346] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.144 [INFO][5346] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" host="localhost" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.153 [INFO][5346] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.164 [INFO][5346] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.168 [INFO][5346] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.172 [INFO][5346] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.172 [INFO][5346] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" host="localhost" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.174 [INFO][5346] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.179 [INFO][5346] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" host="localhost" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.188 [INFO][5346] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" host="localhost" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.188 [INFO][5346] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" host="localhost" Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.188 [INFO][5346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:16:47.216712 containerd[1445]: 2025-07-11 00:16:47.188 [INFO][5346] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" HandleID="k8s-pod-network.7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:16:47.217250 containerd[1445]: 2025-07-11 00:16:47.190 [INFO][5333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Namespace="calico-system" Pod="goldmane-768f4c5c69-lf8r5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"58c7686e-2053-4b0c-9e02-052b8ed1eb7b", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-lf8r5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f234a36eb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:47.217250 containerd[1445]: 2025-07-11 00:16:47.191 [INFO][5333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Namespace="calico-system" Pod="goldmane-768f4c5c69-lf8r5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:16:47.217250 containerd[1445]: 2025-07-11 00:16:47.191 [INFO][5333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f234a36eb1 ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Namespace="calico-system" Pod="goldmane-768f4c5c69-lf8r5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:16:47.217250 containerd[1445]: 2025-07-11 00:16:47.199 [INFO][5333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Namespace="calico-system" Pod="goldmane-768f4c5c69-lf8r5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:16:47.217250 containerd[1445]: 2025-07-11 00:16:47.201 [INFO][5333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Namespace="calico-system" Pod="goldmane-768f4c5c69-lf8r5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"58c7686e-2053-4b0c-9e02-052b8ed1eb7b", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e", Pod:"goldmane-768f4c5c69-lf8r5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f234a36eb1", MAC:"52:68:ba:a9:ea:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:16:47.217250 containerd[1445]: 2025-07-11 00:16:47.211 [INFO][5333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e" Namespace="calico-system" Pod="goldmane-768f4c5c69-lf8r5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:16:47.241885 containerd[1445]: time="2025-07-11T00:16:47.239105294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:47.241885 containerd[1445]: time="2025-07-11T00:16:47.239156740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:47.241885 containerd[1445]: time="2025-07-11T00:16:47.239168341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:47.241885 containerd[1445]: time="2025-07-11T00:16:47.239250431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:47.269432 systemd[1]: Started cri-containerd-7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e.scope - libcontainer container 7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e. 
Jul 11 00:16:47.308802 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:16:47.345224 containerd[1445]: time="2025-07-11T00:16:47.344076668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lf8r5,Uid:58c7686e-2053-4b0c-9e02-052b8ed1eb7b,Namespace:calico-system,Attempt:1,} returns sandbox id \"7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e\"" Jul 11 00:16:48.015067 systemd-networkd[1376]: cali66672ab4cd0: Gained IPv6LL Jul 11 00:16:48.107543 kubelet[2469]: I0711 00:16:48.107508 2469 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:16:48.108145 kubelet[2469]: E0711 00:16:48.108114 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:48.186373 containerd[1445]: time="2025-07-11T00:16:48.186319921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:48.187674 containerd[1445]: time="2025-07-11T00:16:48.187639391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 11 00:16:48.189368 containerd[1445]: time="2025-07-11T00:16:48.189326502Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:48.195755 containerd[1445]: time="2025-07-11T00:16:48.195704626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:48.197750 containerd[1445]: time="2025-07-11T00:16:48.197709174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.523044646s" Jul 11 00:16:48.197750 containerd[1445]: time="2025-07-11T00:16:48.197750778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 11 00:16:48.200946 containerd[1445]: time="2025-07-11T00:16:48.200350793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:16:48.213528 containerd[1445]: time="2025-07-11T00:16:48.213489764Z" level=info msg="CreateContainer within sandbox \"3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:16:48.228696 containerd[1445]: time="2025-07-11T00:16:48.228043095Z" level=info msg="CreateContainer within sandbox \"3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"af582f57926b14faf01409255a5877004e5c137ec819990568fc201075cad09a\"" Jul 11 00:16:48.229352 containerd[1445]: time="2025-07-11T00:16:48.229318960Z" level=info msg="StartContainer for 
\"af582f57926b14faf01409255a5877004e5c137ec819990568fc201075cad09a\"" Jul 11 00:16:48.271072 systemd-networkd[1376]: cali2aed5f2c366: Gained IPv6LL Jul 11 00:16:48.283113 systemd[1]: Started cri-containerd-af582f57926b14faf01409255a5877004e5c137ec819990568fc201075cad09a.scope - libcontainer container af582f57926b14faf01409255a5877004e5c137ec819990568fc201075cad09a. Jul 11 00:16:48.335050 systemd-networkd[1376]: cali08346c5afa5: Gained IPv6LL Jul 11 00:16:48.437583 containerd[1445]: time="2025-07-11T00:16:48.437213067Z" level=info msg="StartContainer for \"af582f57926b14faf01409255a5877004e5c137ec819990568fc201075cad09a\" returns successfully" Jul 11 00:16:48.464974 systemd-networkd[1376]: caliab2ece37036: Gained IPv6LL Jul 11 00:16:48.527777 systemd-networkd[1376]: cali4f234a36eb1: Gained IPv6LL Jul 11 00:16:49.070188 systemd[1]: Started sshd@8-10.0.0.70:22-10.0.0.1:43172.service - OpenSSH per-connection server daemon (10.0.0.1:43172). Jul 11 00:16:49.115132 sshd[5511]: Accepted publickey for core from 10.0.0.1 port 43172 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:49.115831 kubelet[2469]: E0711 00:16:49.115647 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:49.116582 sshd[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:49.125532 systemd-logind[1421]: New session 9 of user core. Jul 11 00:16:49.129041 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:16:49.429750 sshd[5511]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:49.432466 systemd[1]: sshd@8-10.0.0.70:22-10.0.0.1:43172.service: Deactivated successfully. Jul 11 00:16:49.434675 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:16:49.436296 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:16:49.438028 systemd-logind[1421]: Removed session 9. 
Jul 11 00:16:49.798215 containerd[1445]: time="2025-07-11T00:16:49.798159354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:49.798742 containerd[1445]: time="2025-07-11T00:16:49.798708775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 11 00:16:49.801921 containerd[1445]: time="2025-07-11T00:16:49.801866086Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:49.804384 containerd[1445]: time="2025-07-11T00:16:49.804335240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:49.804980 containerd[1445]: time="2025-07-11T00:16:49.804948188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.604559991s" Jul 11 00:16:49.805178 containerd[1445]: time="2025-07-11T00:16:49.805075482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 11 00:16:49.805908 containerd[1445]: time="2025-07-11T00:16:49.805861250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:16:49.808902 containerd[1445]: time="2025-07-11T00:16:49.808844381Z" level=info msg="CreateContainer within sandbox \"b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:16:49.840903 containerd[1445]: time="2025-07-11T00:16:49.840827534Z" level=info msg="CreateContainer within sandbox \"b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a21535b9ae572534f54679d4bf7209333051b0ba0128a70de978f62c88861d2b\"" Jul 11 00:16:49.842503 containerd[1445]: time="2025-07-11T00:16:49.842461236Z" level=info msg="StartContainer for \"a21535b9ae572534f54679d4bf7209333051b0ba0128a70de978f62c88861d2b\"" Jul 11 00:16:49.878414 kubelet[2469]: I0711 00:16:49.878055 2469 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:16:49.878414 kubelet[2469]: E0711 00:16:49.878364 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:49.878112 systemd[1]: run-containerd-runc-k8s.io-a21535b9ae572534f54679d4bf7209333051b0ba0128a70de978f62c88861d2b-runc.BKSYPM.mount: Deactivated successfully. Jul 11 00:16:49.890478 systemd[1]: Started cri-containerd-a21535b9ae572534f54679d4bf7209333051b0ba0128a70de978f62c88861d2b.scope - libcontainer container a21535b9ae572534f54679d4bf7209333051b0ba0128a70de978f62c88861d2b. 
Jul 11 00:16:49.932253 kubelet[2469]: I0711 00:16:49.932184 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f5d7d7856-pzcbk" podStartSLOduration=24.211350465 podStartE2EDuration="28.932170602s" podCreationTimestamp="2025-07-11 00:16:21 +0000 UTC" firstStartedPulling="2025-07-11 00:16:43.478789132 +0000 UTC m=+42.737074155" lastFinishedPulling="2025-07-11 00:16:48.199609269 +0000 UTC m=+47.457894292" observedRunningTime="2025-07-11 00:16:49.125476701 +0000 UTC m=+48.383761724" watchObservedRunningTime="2025-07-11 00:16:49.932170602 +0000 UTC m=+49.190455625" Jul 11 00:16:49.936664 containerd[1445]: time="2025-07-11T00:16:49.936576772Z" level=info msg="StartContainer for \"a21535b9ae572534f54679d4bf7209333051b0ba0128a70de978f62c88861d2b\" returns successfully" Jul 11 00:16:50.118279 kubelet[2469]: E0711 00:16:50.118165 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:50.928923 kernel: bpftool[5671]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 11 00:16:51.140949 systemd-networkd[1376]: vxlan.calico: Link UP Jul 11 00:16:51.140958 systemd-networkd[1376]: vxlan.calico: Gained carrier Jul 11 00:16:52.215866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937442176.mount: Deactivated successfully. Jul 11 00:16:52.571752 containerd[1445]: time="2025-07-11T00:16:52.571689591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 11 00:16:52.576583 containerd[1445]: time="2025-07-11T00:16:52.576201705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.770280088s" Jul 11 00:16:52.576583 containerd[1445]: time="2025-07-11T00:16:52.576239829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 11 00:16:52.579139 containerd[1445]: time="2025-07-11T00:16:52.577855038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:16:52.579942 containerd[1445]: time="2025-07-11T00:16:52.579891492Z" level=info msg="CreateContainer within sandbox \"7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:16:52.580239 containerd[1445]: time="2025-07-11T00:16:52.580206925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:52.591504 containerd[1445]: time="2025-07-11T00:16:52.591461385Z" level=info msg="CreateContainer within sandbox \"7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2da42fb591b9e45ce8fdecbf64c17c9b5a4b7f0a88c9fe3f08ea846b5ed57350\"" Jul 11 00:16:52.593913 containerd[1445]: time="2025-07-11T00:16:52.592154818Z" level=info msg="StartContainer for \"2da42fb591b9e45ce8fdecbf64c17c9b5a4b7f0a88c9fe3f08ea846b5ed57350\"" Jul 11 00:16:52.593913 containerd[1445]: 
time="2025-07-11T00:16:52.592677432Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:52.593913 containerd[1445]: time="2025-07-11T00:16:52.593365665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:52.625048 systemd[1]: Started cri-containerd-2da42fb591b9e45ce8fdecbf64c17c9b5a4b7f0a88c9fe3f08ea846b5ed57350.scope - libcontainer container 2da42fb591b9e45ce8fdecbf64c17c9b5a4b7f0a88c9fe3f08ea846b5ed57350. Jul 11 00:16:52.658183 containerd[1445]: time="2025-07-11T00:16:52.658140657Z" level=info msg="StartContainer for \"2da42fb591b9e45ce8fdecbf64c17c9b5a4b7f0a88c9fe3f08ea846b5ed57350\" returns successfully" Jul 11 00:16:52.687029 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Jul 11 00:16:53.144629 kubelet[2469]: I0711 00:16:53.141987 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-lf8r5" podStartSLOduration=25.913331356 podStartE2EDuration="31.141970258s" podCreationTimestamp="2025-07-11 00:16:22 +0000 UTC" firstStartedPulling="2025-07-11 00:16:47.348453496 +0000 UTC m=+46.606738519" lastFinishedPulling="2025-07-11 00:16:52.577092398 +0000 UTC m=+51.835377421" observedRunningTime="2025-07-11 00:16:53.140042139 +0000 UTC m=+52.398327162" watchObservedRunningTime="2025-07-11 00:16:53.141970258 +0000 UTC m=+52.400255241" Jul 11 00:16:53.769351 containerd[1445]: time="2025-07-11T00:16:53.769302381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:53.770091 containerd[1445]: time="2025-07-11T00:16:53.770055179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 11 00:16:53.771240 containerd[1445]: time="2025-07-11T00:16:53.771037960Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:53.774814 containerd[1445]: time="2025-07-11T00:16:53.774731221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:53.776891 containerd[1445]: time="2025-07-11T00:16:53.776643738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.198739855s" Jul 11 00:16:53.776891 containerd[1445]: time="2025-07-11T00:16:53.776679462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 11 00:16:53.781093 containerd[1445]: time="2025-07-11T00:16:53.781016148Z" level=info msg="CreateContainer within sandbox 
\"b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:16:53.803194 containerd[1445]: time="2025-07-11T00:16:53.803055260Z" level=info msg="CreateContainer within sandbox \"b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ebdfffdaa76647c2a74176d99367d6c78577f80455bb5e47dc0b1e3ce5acfd92\"" Jul 11 00:16:53.805580 containerd[1445]: time="2025-07-11T00:16:53.805415143Z" level=info msg="StartContainer for \"ebdfffdaa76647c2a74176d99367d6c78577f80455bb5e47dc0b1e3ce5acfd92\"" Jul 11 00:16:53.838042 systemd[1]: Started cri-containerd-ebdfffdaa76647c2a74176d99367d6c78577f80455bb5e47dc0b1e3ce5acfd92.scope - libcontainer container ebdfffdaa76647c2a74176d99367d6c78577f80455bb5e47dc0b1e3ce5acfd92. Jul 11 00:16:53.935816 containerd[1445]: time="2025-07-11T00:16:53.935772495Z" level=info msg="StartContainer for \"ebdfffdaa76647c2a74176d99367d6c78577f80455bb5e47dc0b1e3ce5acfd92\" returns successfully" Jul 11 00:16:54.153971 kubelet[2469]: I0711 00:16:54.153445 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v292t" podStartSLOduration=26.075974888 podStartE2EDuration="33.153426143s" podCreationTimestamp="2025-07-11 00:16:21 +0000 UTC" firstStartedPulling="2025-07-11 00:16:46.700069213 +0000 UTC m=+45.958354236" lastFinishedPulling="2025-07-11 00:16:53.777520468 +0000 UTC m=+53.035805491" observedRunningTime="2025-07-11 00:16:54.152521451 +0000 UTC m=+53.410806434" watchObservedRunningTime="2025-07-11 00:16:54.153426143 +0000 UTC m=+53.411711166" Jul 11 00:16:54.447028 systemd[1]: Started sshd@9-10.0.0.70:22-10.0.0.1:35396.service - OpenSSH per-connection server daemon (10.0.0.1:35396). Jul 11 00:16:54.502763 sshd[5903]: Accepted publickey for core from 10.0.0.1 port 35396 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:54.504576 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:54.508266 systemd-logind[1421]: New session 10 of user core. Jul 11 00:16:54.521091 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:16:54.935576 kubelet[2469]: I0711 00:16:54.935478 2469 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 00:16:54.947738 kubelet[2469]: I0711 00:16:54.947709 2469 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 00:16:55.025069 sshd[5903]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:55.041809 systemd[1]: sshd@9-10.0.0.70:22-10.0.0.1:35396.service: Deactivated successfully. Jul 11 00:16:55.043639 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:16:55.045506 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:16:55.047353 systemd[1]: Started sshd@10-10.0.0.70:22-10.0.0.1:35412.service - OpenSSH per-connection server daemon (10.0.0.1:35412). Jul 11 00:16:55.048243 systemd-logind[1421]: Removed session 10. 
Jul 11 00:16:55.092439 sshd[5924]: Accepted publickey for core from 10.0.0.1 port 35412 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:55.093854 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:55.098006 systemd-logind[1421]: New session 11 of user core. Jul 11 00:16:55.106067 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:16:55.306614 sshd[5924]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:55.318611 systemd[1]: sshd@10-10.0.0.70:22-10.0.0.1:35412.service: Deactivated successfully. Jul 11 00:16:55.322730 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:16:55.328526 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:16:55.338214 systemd[1]: Started sshd@11-10.0.0.70:22-10.0.0.1:35424.service - OpenSSH per-connection server daemon (10.0.0.1:35424). Jul 11 00:16:55.339712 systemd-logind[1421]: Removed session 11. Jul 11 00:16:55.369184 sshd[5937]: Accepted publickey for core from 10.0.0.1 port 35424 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:55.370717 sshd[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:55.375111 systemd-logind[1421]: New session 12 of user core. Jul 11 00:16:55.387037 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:16:55.511190 sshd[5937]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:55.514954 systemd[1]: sshd@11-10.0.0.70:22-10.0.0.1:35424.service: Deactivated successfully. Jul 11 00:16:55.517481 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:16:55.520095 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:16:55.520891 systemd-logind[1421]: Removed session 12. Jul 11 00:17:00.522710 systemd[1]: Started sshd@12-10.0.0.70:22-10.0.0.1:35434.service - OpenSSH per-connection server daemon (10.0.0.1:35434). Jul 11 00:17:00.556935 sshd[5959]: Accepted publickey for core from 10.0.0.1 port 35434 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:17:00.558011 sshd[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:00.562195 systemd-logind[1421]: New session 13 of user core. Jul 11 00:17:00.577078 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:17:00.716548 sshd[5959]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:00.727607 systemd[1]: sshd@12-10.0.0.70:22-10.0.0.1:35434.service: Deactivated successfully. Jul 11 00:17:00.730499 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:17:00.732089 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:17:00.739987 systemd[1]: Started sshd@13-10.0.0.70:22-10.0.0.1:35450.service - OpenSSH per-connection server daemon (10.0.0.1:35450). Jul 11 00:17:00.741195 systemd-logind[1421]: Removed session 13. Jul 11 00:17:00.777824 sshd[5973]: Accepted publickey for core from 10.0.0.1 port 35450 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:17:00.779728 sshd[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:00.783953 systemd-logind[1421]: New session 14 of user core. Jul 11 00:17:00.791052 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 11 00:17:00.817151 containerd[1445]: time="2025-07-11T00:17:00.817102942Z" level=info msg="StopPodSandbox for \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\"" Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.885 [WARNING][5986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v292t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fab7f82-393a-41e4-a999-9430044f6a22", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18", Pod:"csi-node-driver-v292t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab2ece37036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.885 [INFO][5986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.885 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" iface="eth0" netns="" Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.885 [INFO][5986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.885 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.911 [INFO][6002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" HandleID="k8s-pod-network.a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.912 [INFO][6002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.912 [INFO][6002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.924 [WARNING][6002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" HandleID="k8s-pod-network.a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.924 [INFO][6002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" HandleID="k8s-pod-network.a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.925 [INFO][6002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:00.936556 containerd[1445]: 2025-07-11 00:17:00.933 [INFO][5986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:17:00.937658 containerd[1445]: time="2025-07-11T00:17:00.936603189Z" level=info msg="TearDown network for sandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\" successfully" Jul 11 00:17:00.937658 containerd[1445]: time="2025-07-11T00:17:00.936629831Z" level=info msg="StopPodSandbox for \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\" returns successfully" Jul 11 00:17:00.987382 containerd[1445]: time="2025-07-11T00:17:00.987317670Z" level=info msg="RemovePodSandbox for \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\"" Jul 11 00:17:00.997477 containerd[1445]: time="2025-07-11T00:17:00.997384608Z" level=info msg="Forcibly stopping sandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\"" Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.037 [WARNING][6021] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v292t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fab7f82-393a-41e4-a999-9430044f6a22", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b71956d1324b6d3cedcafa69d96cb3162005e87b8153900ec37b52ee3ad51b18", Pod:"csi-node-driver-v292t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab2ece37036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.037 [INFO][6021] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.037 [INFO][6021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" iface="eth0" netns="" Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.037 [INFO][6021] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.037 [INFO][6021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.061 [INFO][6029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" HandleID="k8s-pod-network.a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.062 [INFO][6029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.062 [INFO][6029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.073 [WARNING][6029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" HandleID="k8s-pod-network.a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.073 [INFO][6029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" HandleID="k8s-pod-network.a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Workload="localhost-k8s-csi--node--driver--v292t-eth0" Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.075 [INFO][6029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:01.079539 containerd[1445]: 2025-07-11 00:17:01.077 [INFO][6021] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266" Jul 11 00:17:01.079539 containerd[1445]: time="2025-07-11T00:17:01.078959158Z" level=info msg="TearDown network for sandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\" successfully" Jul 11 00:17:01.110114 sshd[5973]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:01.111503 containerd[1445]: time="2025-07-11T00:17:01.111240649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:17:01.111503 containerd[1445]: time="2025-07-11T00:17:01.111344978Z" level=info msg="RemovePodSandbox \"a56724db702b31c9fcf8ac43a5e47fdcc618cac394e6b4619a6c1d5ad54d8266\" returns successfully" Jul 11 00:17:01.113355 containerd[1445]: time="2025-07-11T00:17:01.112965847Z" level=info msg="StopPodSandbox for \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\"" Jul 11 00:17:01.121257 systemd[1]: sshd@13-10.0.0.70:22-10.0.0.1:35450.service: Deactivated successfully. Jul 11 00:17:01.124311 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:17:01.125187 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:17:01.136225 systemd[1]: Started sshd@14-10.0.0.70:22-10.0.0.1:35454.service - OpenSSH per-connection server daemon (10.0.0.1:35454). Jul 11 00:17:01.138559 systemd-logind[1421]: Removed session 14. Jul 11 00:17:01.177212 sshd[6054]: Accepted publickey for core from 10.0.0.1 port 35454 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:17:01.178891 sshd[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:01.183579 systemd-logind[1421]: New session 15 of user core. Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.156 [WARNING][6047] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"58c7686e-2053-4b0c-9e02-052b8ed1eb7b", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e", Pod:"goldmane-768f4c5c69-lf8r5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f234a36eb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.156 [INFO][6047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.156 [INFO][6047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" iface="eth0" netns="" Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.156 [INFO][6047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.156 [INFO][6047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.176 [INFO][6060] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" HandleID="k8s-pod-network.8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.176 [INFO][6060] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.176 [INFO][6060] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.186 [WARNING][6060] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" HandleID="k8s-pod-network.8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.186 [INFO][6060] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" HandleID="k8s-pod-network.8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.187 [INFO][6060] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:01.191576 containerd[1445]: 2025-07-11 00:17:01.189 [INFO][6047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:17:01.192130 containerd[1445]: time="2025-07-11T00:17:01.191607484Z" level=info msg="TearDown network for sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\" successfully" Jul 11 00:17:01.192130 containerd[1445]: time="2025-07-11T00:17:01.191633966Z" level=info msg="StopPodSandbox for \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\" returns successfully" Jul 11 00:17:01.192571 containerd[1445]: time="2025-07-11T00:17:01.192371834Z" level=info msg="RemovePodSandbox for \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\"" Jul 11 00:17:01.192571 containerd[1445]: time="2025-07-11T00:17:01.192404957Z" level=info msg="Forcibly stopping sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\"" Jul 11 00:17:01.195117 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.225 [WARNING][6077] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"58c7686e-2053-4b0c-9e02-052b8ed1eb7b", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d40917f7675cc940346a02dbf9fe3b5adf8fdcc9a1c070f62960a4adf103b2e", Pod:"goldmane-768f4c5c69-lf8r5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f234a36eb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.225 [INFO][6077] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.225 [INFO][6077] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" iface="eth0" netns="" Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.225 [INFO][6077] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.225 [INFO][6077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.244 [INFO][6087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" HandleID="k8s-pod-network.8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.244 [INFO][6087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.244 [INFO][6087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.255 [WARNING][6087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" HandleID="k8s-pod-network.8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.255 [INFO][6087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" HandleID="k8s-pod-network.8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Workload="localhost-k8s-goldmane--768f4c5c69--lf8r5-eth0" Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.257 [INFO][6087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:01.263533 containerd[1445]: 2025-07-11 00:17:01.259 [INFO][6077] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26" Jul 11 00:17:01.263533 containerd[1445]: time="2025-07-11T00:17:01.262862801Z" level=info msg="TearDown network for sandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\" successfully" Jul 11 00:17:01.266805 containerd[1445]: time="2025-07-11T00:17:01.266762800Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:17:01.267967 containerd[1445]: time="2025-07-11T00:17:01.267935027Z" level=info msg="RemovePodSandbox \"8f771114bc9add271fb418647e260a44734472a88adc8c091b0e0b36178a7c26\" returns successfully" Jul 11 00:17:01.268698 containerd[1445]: time="2025-07-11T00:17:01.268620171Z" level=info msg="StopPodSandbox for \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\"" Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.309 [WARNING][6112] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0", GenerateName:"calico-kube-controllers-5f5d7d7856-", Namespace:"calico-system", SelfLink:"", UID:"a38583e9-7b43-4c77-8995-9dc39bf3123b", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5d7d7856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4", Pod:"calico-kube-controllers-5f5d7d7856-pzcbk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a8f6ea5fa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.309 [INFO][6112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.309 [INFO][6112] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" iface="eth0" netns="" Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.309 [INFO][6112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.309 [INFO][6112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.330 [INFO][6120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" HandleID="k8s-pod-network.f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.330 [INFO][6120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.330 [INFO][6120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.342 [WARNING][6120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" HandleID="k8s-pod-network.f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.342 [INFO][6120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" HandleID="k8s-pod-network.f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.343 [INFO][6120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:01.347299 containerd[1445]: 2025-07-11 00:17:01.345 [INFO][6112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:17:01.347299 containerd[1445]: time="2025-07-11T00:17:01.347113513Z" level=info msg="TearDown network for sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\" successfully" Jul 11 00:17:01.347299 containerd[1445]: time="2025-07-11T00:17:01.347136236Z" level=info msg="StopPodSandbox for \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\" returns successfully" Jul 11 00:17:01.347699 containerd[1445]: time="2025-07-11T00:17:01.347609319Z" level=info msg="RemovePodSandbox for \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\"" Jul 11 00:17:01.347699 containerd[1445]: time="2025-07-11T00:17:01.347641482Z" level=info msg="Forcibly stopping sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\"" Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.386 [WARNING][6139] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0", GenerateName:"calico-kube-controllers-5f5d7d7856-", Namespace:"calico-system", SelfLink:"", UID:"a38583e9-7b43-4c77-8995-9dc39bf3123b", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5d7d7856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a22b2305b2257e8070b6d6945899b44bb2baa8af101d79a62ac17128c2a03b4", Pod:"calico-kube-controllers-5f5d7d7856-pzcbk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a8f6ea5fa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.386 [INFO][6139] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.386 [INFO][6139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" iface="eth0" netns="" Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.387 [INFO][6139] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.387 [INFO][6139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.411 [INFO][6147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" HandleID="k8s-pod-network.f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.411 [INFO][6147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.411 [INFO][6147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.423 [WARNING][6147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" HandleID="k8s-pod-network.f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.423 [INFO][6147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" HandleID="k8s-pod-network.f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Workload="localhost-k8s-calico--kube--controllers--5f5d7d7856--pzcbk-eth0" Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.425 [INFO][6147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:01.428927 containerd[1445]: 2025-07-11 00:17:01.427 [INFO][6139] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452" Jul 11 00:17:01.428927 containerd[1445]: time="2025-07-11T00:17:01.428735904Z" level=info msg="TearDown network for sandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\" successfully" Jul 11 00:17:01.431987 containerd[1445]: time="2025-07-11T00:17:01.431941959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:17:01.432062 containerd[1445]: time="2025-07-11T00:17:01.432011606Z" level=info msg="RemovePodSandbox \"f8869e6ca30ad3f48825a060181a47e40ce353ee3dc35f3b44dda3886d63d452\" returns successfully" Jul 11 00:17:01.432479 containerd[1445]: time="2025-07-11T00:17:01.432452366Z" level=info msg="StopPodSandbox for \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\"" Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.470 [WARNING][6171] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"52892df7-6e82-4fd1-8c85-d93129166596", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6", Pod:"coredns-674b8bbfcf-mmnh8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7160a22bbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.471 [INFO][6171] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.471 [INFO][6171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" iface="eth0" netns="" Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.471 [INFO][6171] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.471 [INFO][6171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.490 [INFO][6183] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" HandleID="k8s-pod-network.8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.490 [INFO][6183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.490 [INFO][6183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.500 [WARNING][6183] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" HandleID="k8s-pod-network.8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.500 [INFO][6183] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" HandleID="k8s-pod-network.8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.501 [INFO][6183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:01.505398 containerd[1445]: 2025-07-11 00:17:01.503 [INFO][6171] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:17:01.505398 containerd[1445]: time="2025-07-11T00:17:01.505265387Z" level=info msg="TearDown network for sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\" successfully" Jul 11 00:17:01.505398 containerd[1445]: time="2025-07-11T00:17:01.505288349Z" level=info msg="StopPodSandbox for \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\" returns successfully" Jul 11 00:17:01.505833 containerd[1445]: time="2025-07-11T00:17:01.505733950Z" level=info msg="RemovePodSandbox for \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\"" Jul 11 00:17:01.505833 containerd[1445]: time="2025-07-11T00:17:01.505773233Z" level=info msg="Forcibly stopping sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\"" Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.561 [WARNING][6201] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"52892df7-6e82-4fd1-8c85-d93129166596", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5d721f544751d8f6c795fa575f3eca9d0eebbc667d1a96ba314295af95332b6", Pod:"coredns-674b8bbfcf-mmnh8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7160a22bbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.562 [INFO][6201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.562 [INFO][6201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" iface="eth0" netns="" Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.562 [INFO][6201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.562 [INFO][6201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.586 [INFO][6212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" HandleID="k8s-pod-network.8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.587 [INFO][6212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.587 [INFO][6212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.602 [WARNING][6212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" HandleID="k8s-pod-network.8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.602 [INFO][6212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" HandleID="k8s-pod-network.8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Workload="localhost-k8s-coredns--674b8bbfcf--mmnh8-eth0" Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.605 [INFO][6212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:01.614976 containerd[1445]: 2025-07-11 00:17:01.611 [INFO][6201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e" Jul 11 00:17:01.616606 containerd[1445]: time="2025-07-11T00:17:01.615727511Z" level=info msg="TearDown network for sandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\" successfully" Jul 11 00:17:01.625176 containerd[1445]: time="2025-07-11T00:17:01.625122096Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:17:01.625416 containerd[1445]: time="2025-07-11T00:17:01.625199503Z" level=info msg="RemovePodSandbox \"8a8a4df21c998b2cc93cc78c073b761cc30a17b922cbfc90bc20214ed601ed1e\" returns successfully" Jul 11 00:17:01.626093 containerd[1445]: time="2025-07-11T00:17:01.625677027Z" level=info msg="StopPodSandbox for \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\"" Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.668 [WARNING][6234] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0", GenerateName:"calico-apiserver-6f9544685f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e396f44e-1f83-4d6d-a81a-6662c419d2df", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9544685f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a", Pod:"calico-apiserver-6f9544685f-msjqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2aed5f2c366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.669 [INFO][6234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.669 [INFO][6234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" iface="eth0" netns="" Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.669 [INFO][6234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.669 [INFO][6234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.707 [INFO][6243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" HandleID="k8s-pod-network.224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.707 [INFO][6243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.708 [INFO][6243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.719 [WARNING][6243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" HandleID="k8s-pod-network.224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.719 [INFO][6243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" HandleID="k8s-pod-network.224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.721 [INFO][6243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:01.726602 containerd[1445]: 2025-07-11 00:17:01.723 [INFO][6234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:17:01.728599 containerd[1445]: time="2025-07-11T00:17:01.726645638Z" level=info msg="TearDown network for sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\" successfully" Jul 11 00:17:01.728599 containerd[1445]: time="2025-07-11T00:17:01.726673520Z" level=info msg="StopPodSandbox for \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\" returns successfully" Jul 11 00:17:01.728599 containerd[1445]: time="2025-07-11T00:17:01.727349703Z" level=info msg="RemovePodSandbox for \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\"" Jul 11 00:17:01.728599 containerd[1445]: time="2025-07-11T00:17:01.727377585Z" level=info msg="Forcibly stopping sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\"" Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.770 [WARNING][6261] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0", GenerateName:"calico-apiserver-6f9544685f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e396f44e-1f83-4d6d-a81a-6662c419d2df", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9544685f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a", Pod:"calico-apiserver-6f9544685f-msjqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2aed5f2c366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.770 [INFO][6261] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.770 [INFO][6261] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" iface="eth0" netns="" Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.770 [INFO][6261] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.770 [INFO][6261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.790 [INFO][6269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" HandleID="k8s-pod-network.224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.791 [INFO][6269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.791 [INFO][6269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.800 [WARNING][6269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" HandleID="k8s-pod-network.224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.800 [INFO][6269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" HandleID="k8s-pod-network.224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.802 [INFO][6269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:01.808650 containerd[1445]: 2025-07-11 00:17:01.806 [INFO][6261] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed" Jul 11 00:17:01.809142 containerd[1445]: time="2025-07-11T00:17:01.808688027Z" level=info msg="TearDown network for sandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\" successfully" Jul 11 00:17:01.811712 containerd[1445]: time="2025-07-11T00:17:01.811630698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:17:01.811787 containerd[1445]: time="2025-07-11T00:17:01.811761590Z" level=info msg="RemovePodSandbox \"224ef194ee982bc7d2d6a1ec50db733f6b64f3c5e61a8918c35ba250b75407ed\" returns successfully" Jul 11 00:17:01.812292 containerd[1445]: time="2025-07-11T00:17:01.812258396Z" level=info msg="StopPodSandbox for \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\"" Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.853 [WARNING][6287] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" WorkloadEndpoint="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.853 [INFO][6287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.853 [INFO][6287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" iface="eth0" netns="" Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.853 [INFO][6287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.853 [INFO][6287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.876 [INFO][6296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" HandleID="k8s-pod-network.91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Workload="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.876 [INFO][6296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.876 [INFO][6296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.891 [WARNING][6296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" HandleID="k8s-pod-network.91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Workload="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.891 [INFO][6296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" HandleID="k8s-pod-network.91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Workload="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.893 [INFO][6296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:01.897331 containerd[1445]: 2025-07-11 00:17:01.895 [INFO][6287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:17:01.897939 containerd[1445]: time="2025-07-11T00:17:01.897332024Z" level=info msg="TearDown network for sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\" successfully" Jul 11 00:17:01.897939 containerd[1445]: time="2025-07-11T00:17:01.897359507Z" level=info msg="StopPodSandbox for \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\" returns successfully" Jul 11 00:17:01.900018 containerd[1445]: time="2025-07-11T00:17:01.898321955Z" level=info msg="RemovePodSandbox for \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\"" Jul 11 00:17:01.900018 containerd[1445]: time="2025-07-11T00:17:01.898354278Z" level=info msg="Forcibly stopping sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\"" Jul 11 00:17:01.913324 kubelet[2469]: I0711 00:17:01.913277 2469 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:17:01.942724 sshd[6054]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:01.950735 systemd[1]: sshd@14-10.0.0.70:22-10.0.0.1:35454.service: Deactivated successfully. Jul 11 00:17:01.953217 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:17:01.960812 systemd-logind[1421]: Session 15 logged out. 
Waiting for processes to exit. Jul 11 00:17:01.977417 systemd[1]: Started sshd@15-10.0.0.70:22-10.0.0.1:35456.service - OpenSSH per-connection server daemon (10.0.0.1:35456). Jul 11 00:17:01.979284 systemd-logind[1421]: Removed session 15. Jul 11 00:17:01.986704 kubelet[2469]: I0711 00:17:01.986425 2469 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:17:01.994062 containerd[1445]: time="2025-07-11T00:17:01.994016321Z" level=info msg="StopContainer for \"ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4\" with timeout 30 (s)" Jul 11 00:17:01.994431 containerd[1445]: time="2025-07-11T00:17:01.994410517Z" level=info msg="Stop container \"ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4\" with signal terminated" Jul 11 00:17:02.029447 sshd[6326]: Accepted publickey for core from 10.0.0.1 port 35456 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:17:02.031244 systemd[1]: cri-containerd-ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4.scope: Deactivated successfully. Jul 11 00:17:02.031659 systemd[1]: cri-containerd-ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4.scope: Consumed 1.621s CPU time. Jul 11 00:17:02.036767 sshd[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:02.047593 systemd-logind[1421]: New session 16 of user core. Jul 11 00:17:02.059156 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:17:02.088410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4-rootfs.mount: Deactivated successfully. Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.030 [WARNING][6313] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" WorkloadEndpoint="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.032 [INFO][6313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.032 [INFO][6313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" iface="eth0" netns="" Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.035 [INFO][6313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.035 [INFO][6313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.077 [INFO][6337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" HandleID="k8s-pod-network.91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Workload="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.077 [INFO][6337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.077 [INFO][6337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.089 [WARNING][6337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" HandleID="k8s-pod-network.91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Workload="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.089 [INFO][6337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" HandleID="k8s-pod-network.91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Workload="localhost-k8s-whisker--7d7596b9b4--8vr76-eth0" Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.091 [INFO][6337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:02.095969 containerd[1445]: 2025-07-11 00:17:02.093 [INFO][6313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd" Jul 11 00:17:02.109596 containerd[1445]: time="2025-07-11T00:17:02.085929652Z" level=info msg="shim disconnected" id=ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4 namespace=k8s.io Jul 11 00:17:02.109703 containerd[1445]: time="2025-07-11T00:17:02.109604207Z" level=warning msg="cleaning up after shim disconnected" id=ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4 namespace=k8s.io Jul 11 00:17:02.109703 containerd[1445]: time="2025-07-11T00:17:02.109622128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:17:02.109703 containerd[1445]: time="2025-07-11T00:17:02.096010250Z" level=info msg="TearDown network for sandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\" successfully" Jul 11 00:17:02.113326 containerd[1445]: time="2025-07-11T00:17:02.113267700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:17:02.113394 containerd[1445]: time="2025-07-11T00:17:02.113380750Z" level=info msg="RemovePodSandbox \"91d9126420c4884f998f4f630d214e304166eef91c957e3f360979b30edba0bd\" returns successfully" Jul 11 00:17:02.114188 containerd[1445]: time="2025-07-11T00:17:02.113901478Z" level=info msg="StopPodSandbox for \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\"" Jul 11 00:17:02.138683 containerd[1445]: time="2025-07-11T00:17:02.138637249Z" level=info msg="StopContainer for \"ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4\" returns successfully" Jul 11 00:17:02.166042 containerd[1445]: time="2025-07-11T00:17:02.165900570Z" level=info msg="StopPodSandbox for \"dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602\"" Jul 11 00:17:02.166042 containerd[1445]: time="2025-07-11T00:17:02.165979737Z" level=info msg="Container to stop \"ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:17:02.170418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602-shm.mount: Deactivated successfully. 
Jul 11 00:17:02.179782 systemd[1]: cri-containerd-dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602.scope: Deactivated successfully. Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.152 [WARNING][6380] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0", GenerateName:"calico-apiserver-77dc4685dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8e35104-a845-48f9-8ad5-cc498d1edd3f", ResourceVersion:"1248", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77dc4685dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e", Pod:"calico-apiserver-77dc4685dc-gc67k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali517df191baf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.153 [INFO][6380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.153 [INFO][6380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" iface="eth0" netns="" Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.153 [INFO][6380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.153 [INFO][6380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.186 [INFO][6393] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" HandleID="k8s-pod-network.af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.186 [INFO][6393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.186 [INFO][6393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.199 [WARNING][6393] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" HandleID="k8s-pod-network.af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.201 [INFO][6393] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" HandleID="k8s-pod-network.af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.204 [INFO][6393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:02.208790 containerd[1445]: 2025-07-11 00:17:02.206 [INFO][6380] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:17:02.208790 containerd[1445]: time="2025-07-11T00:17:02.208716306Z" level=info msg="TearDown network for sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\" successfully" Jul 11 00:17:02.208790 containerd[1445]: time="2025-07-11T00:17:02.208741988Z" level=info msg="StopPodSandbox for \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\" returns successfully" Jul 11 00:17:02.210143 containerd[1445]: time="2025-07-11T00:17:02.209286558Z" level=info msg="RemovePodSandbox for \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\"" Jul 11 00:17:02.210143 containerd[1445]: time="2025-07-11T00:17:02.209344603Z" level=info msg="Forcibly stopping sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\"" Jul 11 00:17:02.212078 containerd[1445]: time="2025-07-11T00:17:02.211939039Z" level=info msg="shim disconnected" id=dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602 namespace=k8s.io Jul 11 00:17:02.212170 containerd[1445]: time="2025-07-11T00:17:02.212079292Z" level=warning msg="cleaning up after shim disconnected" id=dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602 namespace=k8s.io Jul 11 00:17:02.212170 containerd[1445]: time="2025-07-11T00:17:02.212091013Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:17:02.213664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602-rootfs.mount: Deactivated successfully. Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.264 [WARNING][6435] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0", GenerateName:"calico-apiserver-77dc4685dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8e35104-a845-48f9-8ad5-cc498d1edd3f", ResourceVersion:"1248", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77dc4685dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d62f39bfa10cb5d862d538fded60febbc94a69daf301c336dc4fba1c63c00b9e", Pod:"calico-apiserver-77dc4685dc-gc67k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali517df191baf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.264 [INFO][6435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.264 [INFO][6435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" iface="eth0" netns="" Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.264 [INFO][6435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.264 [INFO][6435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.296 [INFO][6468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" HandleID="k8s-pod-network.af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.296 [INFO][6468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.296 [INFO][6468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.306 [WARNING][6468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" HandleID="k8s-pod-network.af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.306 [INFO][6468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" HandleID="k8s-pod-network.af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Workload="localhost-k8s-calico--apiserver--77dc4685dc--gc67k-eth0" Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.308 [INFO][6468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:02.316391 containerd[1445]: 2025-07-11 00:17:02.310 [INFO][6435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828" Jul 11 00:17:02.316941 containerd[1445]: time="2025-07-11T00:17:02.316438068Z" level=info msg="TearDown network for sandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\" successfully" Jul 11 00:17:02.320702 systemd-networkd[1376]: cali08346c5afa5: Link DOWN Jul 11 00:17:02.320709 systemd-networkd[1376]: cali08346c5afa5: Lost carrier Jul 11 00:17:02.331921 containerd[1445]: time="2025-07-11T00:17:02.331273818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:17:02.331921 containerd[1445]: time="2025-07-11T00:17:02.331362627Z" level=info msg="RemovePodSandbox \"af283d06d22b49663dc6b6ba4ab986e9e4739f0ec63417f87960da6d81a7c828\" returns successfully" Jul 11 00:17:02.342138 containerd[1445]: time="2025-07-11T00:17:02.341925348Z" level=info msg="StopPodSandbox for \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\"" Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.316 [INFO][6460] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.316 [INFO][6460] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" iface="eth0" netns="/var/run/netns/cni-14d142f6-d33b-4548-d71b-b1f4936b0048" Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.316 [INFO][6460] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" iface="eth0" netns="/var/run/netns/cni-14d142f6-d33b-4548-d71b-b1f4936b0048" Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.335 [INFO][6460] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" after=19.1175ms iface="eth0" netns="/var/run/netns/cni-14d142f6-d33b-4548-d71b-b1f4936b0048" Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.336 [INFO][6460] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.336 [INFO][6460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.375 [INFO][6485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" HandleID="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.376 [INFO][6485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.376 [INFO][6485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.490 [INFO][6485] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" HandleID="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.490 [INFO][6485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" HandleID="k8s-pod-network.dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.494 [INFO][6485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:02.500930 containerd[1445]: 2025-07-11 00:17:02.496 [INFO][6460] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602" Jul 11 00:17:02.504525 containerd[1445]: time="2025-07-11T00:17:02.501991714Z" level=info msg="TearDown network for sandbox \"dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602\" successfully" Jul 11 00:17:02.504525 containerd[1445]: time="2025-07-11T00:17:02.502024197Z" level=info msg="StopPodSandbox for \"dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602\" returns successfully" Jul 11 00:17:02.506246 containerd[1445]: time="2025-07-11T00:17:02.506127090Z" level=info msg="StopPodSandbox for \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\"" Jul 11 00:17:02.508067 systemd[1]: run-netns-cni\x2d14d142f6\x2dd33b\x2d4548\x2dd71b\x2db1f4936b0048.mount: Deactivated successfully. Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.398 [WARNING][6501] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67c0bfe2-c5c5-48ac-a593-483d9d147ed4", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be", Pod:"coredns-674b8bbfcf-fxqzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66672ab4cd0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.398 [INFO][6501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.398 [INFO][6501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" iface="eth0" netns="" Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.398 [INFO][6501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.398 [INFO][6501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.436 [INFO][6512] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" HandleID="k8s-pod-network.7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.436 [INFO][6512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.494 [INFO][6512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.511 [WARNING][6512] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" HandleID="k8s-pod-network.7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.512 [INFO][6512] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" HandleID="k8s-pod-network.7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.514 [INFO][6512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:02.524199 containerd[1445]: 2025-07-11 00:17:02.520 [INFO][6501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:17:02.524199 containerd[1445]: time="2025-07-11T00:17:02.524247579Z" level=info msg="TearDown network for sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\" successfully" Jul 11 00:17:02.524789 containerd[1445]: time="2025-07-11T00:17:02.524271901Z" level=info msg="StopPodSandbox for \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\" returns successfully" Jul 11 00:17:02.525710 containerd[1445]: time="2025-07-11T00:17:02.525664628Z" level=info msg="RemovePodSandbox for \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\"" Jul 11 00:17:02.525710 containerd[1445]: time="2025-07-11T00:17:02.525703752Z" level=info msg="Forcibly stopping sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\"" Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.558 [WARNING][6530] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0", GenerateName:"calico-apiserver-6f9544685f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9544685f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602", Pod:"calico-apiserver-6f9544685f-czxjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08346c5afa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.559 [INFO][6530] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.559 [INFO][6530] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" iface="eth0" netns="" Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.559 [INFO][6530] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.559 [INFO][6530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.588 [INFO][6554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.588 [INFO][6554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.589 [INFO][6554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.599 [WARNING][6554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.599 [INFO][6554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.601 [INFO][6554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:02.612930 containerd[1445]: 2025-07-11 00:17:02.603 [INFO][6530] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.612930 containerd[1445]: time="2025-07-11T00:17:02.611065159Z" level=info msg="TearDown network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\" successfully" Jul 11 00:17:02.612930 containerd[1445]: time="2025-07-11T00:17:02.611090562Z" level=info msg="StopPodSandbox for \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\" returns successfully" Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.585 [WARNING][6546] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67c0bfe2-c5c5-48ac-a593-483d9d147ed4", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"064bfd8ee230b588f9ecea793d558f73cd7a742a1b5c34b35456a016566c33be", Pod:"coredns-674b8bbfcf-fxqzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66672ab4cd0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.586 [INFO][6546] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.586 [INFO][6546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" iface="eth0" netns="" Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.586 [INFO][6546] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.586 [INFO][6546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.608 [INFO][6564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" HandleID="k8s-pod-network.7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.609 [INFO][6564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.609 [INFO][6564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.619 [WARNING][6564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" HandleID="k8s-pod-network.7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.620 [INFO][6564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" HandleID="k8s-pod-network.7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Workload="localhost-k8s-coredns--674b8bbfcf--fxqzj-eth0" Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.623 [INFO][6564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:02.633347 containerd[1445]: 2025-07-11 00:17:02.630 [INFO][6546] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df" Jul 11 00:17:02.633749 containerd[1445]: time="2025-07-11T00:17:02.633386871Z" level=info msg="TearDown network for sandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\" successfully" Jul 11 00:17:02.646686 containerd[1445]: time="2025-07-11T00:17:02.646492223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 11 00:17:02.646686 containerd[1445]: time="2025-07-11T00:17:02.646580551Z" level=info msg="RemovePodSandbox \"7988121ae3f6d45243f27655e26f1409398600e23d0fb1f41111aa10054cb6df\" returns successfully" Jul 11 00:17:02.649069 containerd[1445]: time="2025-07-11T00:17:02.649031814Z" level=info msg="StopPodSandbox for \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\"" Jul 11 00:17:02.687341 sshd[6326]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:02.699267 systemd[1]: sshd@15-10.0.0.70:22-10.0.0.1:35456.service: Deactivated successfully. Jul 11 00:17:02.707205 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:17:02.710093 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:17:02.718201 systemd[1]: Started sshd@16-10.0.0.70:22-10.0.0.1:38226.service - OpenSSH per-connection server daemon (10.0.0.1:38226). Jul 11 00:17:02.720410 systemd-logind[1421]: Removed session 16. Jul 11 00:17:02.730091 kubelet[2469]: I0711 00:17:02.730049 2469 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6ab768e-fd1a-4783-bee2-e85ef4dae0dc-calico-apiserver-certs\") pod \"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc\" (UID: \"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc\") " Jul 11 00:17:02.730218 kubelet[2469]: I0711 00:17:02.730100 2469 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24vjh\" (UniqueName: \"kubernetes.io/projected/f6ab768e-fd1a-4783-bee2-e85ef4dae0dc-kube-api-access-24vjh\") pod \"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc\" (UID: \"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc\") " Jul 11 00:17:02.734850 systemd[1]: var-lib-kubelet-pods-f6ab768e\x2dfd1a\x2d4783\x2dbee2\x2de85ef4dae0dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24vjh.mount: Deactivated successfully. Jul 11 00:17:02.735999 systemd[1]: var-lib-kubelet-pods-f6ab768e\x2dfd1a\x2d4783\x2dbee2\x2de85ef4dae0dc-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 11 00:17:02.736143 kubelet[2469]: I0711 00:17:02.736103 2469 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ab768e-fd1a-4783-bee2-e85ef4dae0dc-kube-api-access-24vjh" (OuterVolumeSpecName: "kube-api-access-24vjh") pod "f6ab768e-fd1a-4783-bee2-e85ef4dae0dc" (UID: "f6ab768e-fd1a-4783-bee2-e85ef4dae0dc"). InnerVolumeSpecName "kube-api-access-24vjh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:17:02.736336 kubelet[2469]: I0711 00:17:02.736291 2469 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ab768e-fd1a-4783-bee2-e85ef4dae0dc-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "f6ab768e-fd1a-4783-bee2-e85ef4dae0dc" (UID: "f6ab768e-fd1a-4783-bee2-e85ef4dae0dc"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.710 [WARNING][6583] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0", GenerateName:"calico-apiserver-6f9544685f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc", ResourceVersion:"1297", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9544685f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602", Pod:"calico-apiserver-6f9544685f-czxjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08346c5afa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.710 [INFO][6583] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.710 [INFO][6583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" iface="eth0" netns="" Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.710 [INFO][6583] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.710 [INFO][6583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.731 [INFO][6594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.731 [INFO][6594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.731 [INFO][6594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.746 [WARNING][6594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.746 [INFO][6594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.750 [INFO][6594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:02.754047 containerd[1445]: 2025-07-11 00:17:02.751 [INFO][6583] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.754424 containerd[1445]: time="2025-07-11T00:17:02.754087334Z" level=info msg="TearDown network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\" successfully" Jul 11 00:17:02.754424 containerd[1445]: time="2025-07-11T00:17:02.754113337Z" level=info msg="StopPodSandbox for \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\" returns successfully" Jul 11 00:17:02.755879 containerd[1445]: time="2025-07-11T00:17:02.755839774Z" level=info msg="RemovePodSandbox for \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\"" Jul 11 00:17:02.755958 containerd[1445]: time="2025-07-11T00:17:02.755888738Z" level=info msg="Forcibly stopping sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\"" Jul 11 00:17:02.758916 sshd[6595]: Accepted publickey for core from 10.0.0.1 port 38226 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:17:02.760060 sshd[6595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:02.765961 systemd-logind[1421]: New session 17 of user core. Jul 11 00:17:02.770061 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:17:02.830468 kubelet[2469]: I0711 00:17:02.830421 2469 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-24vjh\" (UniqueName: \"kubernetes.io/projected/f6ab768e-fd1a-4783-bee2-e85ef4dae0dc-kube-api-access-24vjh\") on node \"localhost\" DevicePath \"\"" Jul 11 00:17:02.830468 kubelet[2469]: I0711 00:17:02.830460 2469 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6ab768e-fd1a-4783-bee2-e85ef4dae0dc-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.792 [WARNING][6616] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0", GenerateName:"calico-apiserver-6f9544685f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6ab768e-fd1a-4783-bee2-e85ef4dae0dc", ResourceVersion:"1297", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9544685f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd1915ecaad79db4eac28ef893ca236f63de3c298db079cb126e05a2859e2602", Pod:"calico-apiserver-6f9544685f-czxjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08346c5afa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.793 [INFO][6616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.793 [INFO][6616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" iface="eth0" netns="" Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.793 [INFO][6616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.793 [INFO][6616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.813 [INFO][6625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.813 [INFO][6625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.813 [INFO][6625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.823 [WARNING][6625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.823 [INFO][6625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" HandleID="k8s-pod-network.4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Workload="localhost-k8s-calico--apiserver--6f9544685f--czxjh-eth0" Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.826 [INFO][6625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:02.831402 containerd[1445]: 2025-07-11 00:17:02.828 [INFO][6616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847" Jul 11 00:17:02.831861 containerd[1445]: time="2025-07-11T00:17:02.831449654Z" level=info msg="TearDown network for sandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\" successfully" Jul 11 00:17:02.834629 containerd[1445]: time="2025-07-11T00:17:02.834585420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:17:02.834737 containerd[1445]: time="2025-07-11T00:17:02.834653746Z" level=info msg="RemovePodSandbox \"4384e70c15d71930f959c1374faa9b7fe523acf005aae17725d857c658a5b847\" returns successfully" Jul 11 00:17:02.846834 systemd[1]: Removed slice kubepods-besteffort-podf6ab768e_fd1a_4783_bee2_e85ef4dae0dc.slice - libcontainer container kubepods-besteffort-podf6ab768e_fd1a_4783_bee2_e85ef4dae0dc.slice. Jul 11 00:17:02.846951 systemd[1]: kubepods-besteffort-podf6ab768e_fd1a_4783_bee2_e85ef4dae0dc.slice: Consumed 1.640s CPU time. Jul 11 00:17:02.914156 sshd[6595]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:02.917754 systemd[1]: sshd@16-10.0.0.70:22-10.0.0.1:38226.service: Deactivated successfully. Jul 11 00:17:02.919637 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:17:02.920411 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:17:02.921313 systemd-logind[1421]: Removed session 17. Jul 11 00:17:03.200840 kubelet[2469]: I0711 00:17:03.200479 2469 scope.go:117] "RemoveContainer" containerID="ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4" Jul 11 00:17:03.203909 containerd[1445]: time="2025-07-11T00:17:03.203467515Z" level=info msg="RemoveContainer for \"ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4\"" Jul 11 00:17:03.208647 containerd[1445]: time="2025-07-11T00:17:03.208575855Z" level=info msg="RemoveContainer for \"ac083efcbf1eb38197ac9434fd33eb27b8a2a3aa5456b2e661c821d1c0f957a4\" returns successfully" Jul 11 00:17:04.841386 kubelet[2469]: I0711 00:17:04.841344 2469 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ab768e-fd1a-4783-bee2-e85ef4dae0dc" path="/var/lib/kubelet/pods/f6ab768e-fd1a-4783-bee2-e85ef4dae0dc/volumes" Jul 11 00:17:07.927478 systemd[1]: Started sshd@17-10.0.0.70:22-10.0.0.1:38236.service - OpenSSH per-connection server daemon (10.0.0.1:38236). 
Jul 11 00:17:07.967550 sshd[6651]: Accepted publickey for core from 10.0.0.1 port 38236 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:17:07.968443 sshd[6651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:07.972966 systemd-logind[1421]: New session 18 of user core. Jul 11 00:17:07.983190 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:17:08.122808 sshd[6651]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:08.127100 systemd[1]: sshd@17-10.0.0.70:22-10.0.0.1:38236.service: Deactivated successfully. Jul 11 00:17:08.129568 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:17:08.130437 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:17:08.133640 systemd-logind[1421]: Removed session 18. Jul 11 00:17:09.894785 containerd[1445]: time="2025-07-11T00:17:09.894638311Z" level=info msg="StopContainer for \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\" with timeout 30 (s)" Jul 11 00:17:09.895808 containerd[1445]: time="2025-07-11T00:17:09.895720883Z" level=info msg="Stop container \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\" with signal terminated" Jul 11 00:17:09.918247 systemd[1]: cri-containerd-419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807.scope: Deactivated successfully. Jul 11 00:17:09.918699 systemd[1]: cri-containerd-419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807.scope: Consumed 1.521s CPU time. Jul 11 00:17:09.939159 containerd[1445]: time="2025-07-11T00:17:09.939094635Z" level=info msg="shim disconnected" id=419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807 namespace=k8s.io Jul 11 00:17:09.939159 containerd[1445]: time="2025-07-11T00:17:09.939155709Z" level=warning msg="cleaning up after shim disconnected" id=419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807 namespace=k8s.io Jul 11 00:17:09.939360 containerd[1445]: time="2025-07-11T00:17:09.939165388Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:17:09.941249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807-rootfs.mount: Deactivated successfully. Jul 11 00:17:09.965225 containerd[1445]: time="2025-07-11T00:17:09.965179631Z" level=info msg="StopContainer for \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\" returns successfully" Jul 11 00:17:09.966015 containerd[1445]: time="2025-07-11T00:17:09.965990990Z" level=info msg="StopPodSandbox for \"7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a\"" Jul 11 00:17:09.966102 containerd[1445]: time="2025-07-11T00:17:09.966030307Z" level=info msg="Container to stop \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:17:09.968521 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a-shm.mount: Deactivated successfully. Jul 11 00:17:09.974088 systemd[1]: cri-containerd-7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a.scope: Deactivated successfully. Jul 11 00:17:09.996047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a-rootfs.mount: Deactivated successfully. 
Jul 11 00:17:09.996618 containerd[1445]: time="2025-07-11T00:17:09.996392436Z" level=info msg="shim disconnected" id=7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a namespace=k8s.io Jul 11 00:17:09.996618 containerd[1445]: time="2025-07-11T00:17:09.996549861Z" level=warning msg="cleaning up after shim disconnected" id=7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a namespace=k8s.io Jul 11 00:17:09.996618 containerd[1445]: time="2025-07-11T00:17:09.996560220Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:17:10.059060 systemd-networkd[1376]: cali2aed5f2c366: Link DOWN Jul 11 00:17:10.059067 systemd-networkd[1376]: cali2aed5f2c366: Lost carrier Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.058 [INFO][6765] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.058 [INFO][6765] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" iface="eth0" netns="/var/run/netns/cni-f2374417-78ce-f6b3-d802-204b3877b781" Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.058 [INFO][6765] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" iface="eth0" netns="/var/run/netns/cni-f2374417-78ce-f6b3-d802-204b3877b781" Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.072 [INFO][6765] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" after=14.198372ms iface="eth0" netns="/var/run/netns/cni-f2374417-78ce-f6b3-d802-204b3877b781" Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.072 [INFO][6765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.072 [INFO][6765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.092 [INFO][6780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" HandleID="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.093 [INFO][6780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.093 [INFO][6780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.128 [INFO][6780] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" HandleID="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.128 [INFO][6780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" HandleID="k8s-pod-network.7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Workload="localhost-k8s-calico--apiserver--6f9544685f--msjqm-eth0" Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.135 [INFO][6780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:10.138613 containerd[1445]: 2025-07-11 00:17:10.136 [INFO][6765] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a" Jul 11 00:17:10.139341 containerd[1445]: time="2025-07-11T00:17:10.138857406Z" level=info msg="TearDown network for sandbox \"7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a\" successfully" Jul 11 00:17:10.139341 containerd[1445]: time="2025-07-11T00:17:10.138895443Z" level=info msg="StopPodSandbox for \"7874135f4314a1da3565c9afae666f8936aa2a67663c6e5ee35660bf8ed2f94a\" returns successfully" Jul 11 00:17:10.141342 systemd[1]: run-netns-cni\x2df2374417\x2d78ce\x2df6b3\x2dd802\x2d204b3877b781.mount: Deactivated successfully. Jul 11 00:17:10.207638 kubelet[2469]: I0711 00:17:10.206544 2469 scope.go:117] "RemoveContainer" containerID="419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807" Jul 11 00:17:10.208559 containerd[1445]: time="2025-07-11T00:17:10.208515194Z" level=info msg="RemoveContainer for \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\"" Jul 11 00:17:10.212784 containerd[1445]: time="2025-07-11T00:17:10.212727794Z" level=info msg="RemoveContainer for \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\" returns successfully" Jul 11 00:17:10.213129 kubelet[2469]: I0711 00:17:10.213095 2469 scope.go:117] "RemoveContainer" containerID="419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807" Jul 11 00:17:10.213559 containerd[1445]: time="2025-07-11T00:17:10.213516400Z" level=error msg="ContainerStatus for \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\": not found" Jul 11 00:17:10.221598 kubelet[2469]: E0711 00:17:10.221550 2469 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\": not found" containerID="419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807" Jul 11 00:17:10.221723 kubelet[2469]: I0711 00:17:10.221600 2469 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807"} err="failed to get container status \"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"419bd6b26cd0bd80cc20c00fed5c1e3f050a53e718ff4e53560f28ce43b99807\": not found" Jul 11 00:17:10.283492 kubelet[2469]: I0711 00:17:10.283386 2469 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e396f44e-1f83-4d6d-a81a-6662c419d2df-calico-apiserver-certs\") pod \"e396f44e-1f83-4d6d-a81a-6662c419d2df\" (UID: \"e396f44e-1f83-4d6d-a81a-6662c419d2df\") " Jul 11 00:17:10.283699 kubelet[2469]: I0711 00:17:10.283682 2469 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkjq5\" (UniqueName: \"kubernetes.io/projected/e396f44e-1f83-4d6d-a81a-6662c419d2df-kube-api-access-lkjq5\") pod \"e396f44e-1f83-4d6d-a81a-6662c419d2df\" (UID: \"e396f44e-1f83-4d6d-a81a-6662c419d2df\") " Jul 11 00:17:10.286551 kubelet[2469]: I0711 00:17:10.286481 2469 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e396f44e-1f83-4d6d-a81a-6662c419d2df-kube-api-access-lkjq5" (OuterVolumeSpecName: "kube-api-access-lkjq5") pod "e396f44e-1f83-4d6d-a81a-6662c419d2df" (UID: "e396f44e-1f83-4d6d-a81a-6662c419d2df"). InnerVolumeSpecName "kube-api-access-lkjq5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:17:10.289041 systemd[1]: var-lib-kubelet-pods-e396f44e\x2d1f83\x2d4d6d\x2da81a\x2d6662c419d2df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlkjq5.mount: Deactivated successfully. Jul 11 00:17:10.289156 systemd[1]: var-lib-kubelet-pods-e396f44e\x2d1f83\x2d4d6d\x2da81a\x2d6662c419d2df-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 11 00:17:10.290367 kubelet[2469]: I0711 00:17:10.290047 2469 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e396f44e-1f83-4d6d-a81a-6662c419d2df-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e396f44e-1f83-4d6d-a81a-6662c419d2df" (UID: "e396f44e-1f83-4d6d-a81a-6662c419d2df"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:17:10.385048 kubelet[2469]: I0711 00:17:10.384991 2469 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lkjq5\" (UniqueName: \"kubernetes.io/projected/e396f44e-1f83-4d6d-a81a-6662c419d2df-kube-api-access-lkjq5\") on node \"localhost\" DevicePath \"\"" Jul 11 00:17:10.385048 kubelet[2469]: I0711 00:17:10.385027 2469 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e396f44e-1f83-4d6d-a81a-6662c419d2df-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 11 00:17:10.511044 systemd[1]: Removed slice kubepods-besteffort-pode396f44e_1f83_4d6d_a81a_6662c419d2df.slice - libcontainer container kubepods-besteffort-pode396f44e_1f83_4d6d_a81a_6662c419d2df.slice. Jul 11 00:17:10.511134 systemd[1]: kubepods-besteffort-pode396f44e_1f83_4d6d_a81a_6662c419d2df.slice: Consumed 1.538s CPU time. Jul 11 00:17:10.841401 kubelet[2469]: I0711 00:17:10.841279 2469 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e396f44e-1f83-4d6d-a81a-6662c419d2df" path="/var/lib/kubelet/pods/e396f44e-1f83-4d6d-a81a-6662c419d2df/volumes" Jul 11 00:17:13.141674 systemd[1]: Started sshd@18-10.0.0.70:22-10.0.0.1:42306.service - OpenSSH per-connection server daemon (10.0.0.1:42306). 
Jul 11 00:17:13.182736 sshd[6801]: Accepted publickey for core from 10.0.0.1 port 42306 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:17:13.184104 sshd[6801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:13.188677 systemd-logind[1421]: New session 19 of user core. Jul 11 00:17:13.198072 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 00:17:13.309554 sshd[6801]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:13.313022 systemd[1]: sshd@18-10.0.0.70:22-10.0.0.1:42306.service: Deactivated successfully. Jul 11 00:17:13.315086 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:17:13.315851 systemd-logind[1421]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:17:13.316671 systemd-logind[1421]: Removed session 19. Jul 11 00:17:14.841114 kubelet[2469]: E0711 00:17:14.841054 2469 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:18.324510 systemd[1]: Started sshd@19-10.0.0.70:22-10.0.0.1:42310.service - OpenSSH per-connection server daemon (10.0.0.1:42310). Jul 11 00:17:18.367617 sshd[6818]: Accepted publickey for core from 10.0.0.1 port 42310 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:17:18.373174 sshd[6818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:18.379120 systemd-logind[1421]: New session 20 of user core. Jul 11 00:17:18.389047 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:17:18.504398 sshd[6818]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:18.508131 systemd[1]: sshd@19-10.0.0.70:22-10.0.0.1:42310.service: Deactivated successfully. Jul 11 00:17:18.510513 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:17:18.514446 systemd-logind[1421]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:17:18.519477 systemd-logind[1421]: Removed session 20.