Jul 7 06:07:00.886573 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 7 06:07:00.886595 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025
Jul 7 06:07:00.886605 kernel: KASLR enabled
Jul 7 06:07:00.886611 kernel: efi: EFI v2.7 by EDK II
Jul 7 06:07:00.886617 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 7 06:07:00.886622 kernel: random: crng init done
Jul 7 06:07:00.886630 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:07:00.886636 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 7 06:07:00.886642 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 06:07:00.886649 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:07:00.886656 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:07:00.886662 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:07:00.886707 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:07:00.886714 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:07:00.886722 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:07:00.886730 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:07:00.886737 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:07:00.886743 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:07:00.886750 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 7 06:07:00.886756 kernel: NUMA: Failed to initialise from firmware
Jul 7 06:07:00.886763 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:07:00.886769 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 7 06:07:00.886775 kernel: Zone ranges:
Jul 7 06:07:00.886782 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:07:00.886788 kernel: DMA32 empty
Jul 7 06:07:00.886796 kernel: Normal empty
Jul 7 06:07:00.886802 kernel: Movable zone start for each node
Jul 7 06:07:00.886808 kernel: Early memory node ranges
Jul 7 06:07:00.886815 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 7 06:07:00.886821 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 7 06:07:00.886828 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 7 06:07:00.886834 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 7 06:07:00.886840 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 7 06:07:00.886847 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 7 06:07:00.886853 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 7 06:07:00.886860 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:07:00.886866 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 7 06:07:00.886874 kernel: psci: probing for conduit method from ACPI.
Jul 7 06:07:00.886880 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 06:07:00.886887 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 06:07:00.886896 kernel: psci: Trusted OS migration not required
Jul 7 06:07:00.886902 kernel: psci: SMC Calling Convention v1.1
Jul 7 06:07:00.886909 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 7 06:07:00.886918 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 7 06:07:00.886924 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 7 06:07:00.886931 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 7 06:07:00.886938 kernel: Detected PIPT I-cache on CPU0
Jul 7 06:07:00.886945 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 06:07:00.886951 kernel: CPU features: detected: Hardware dirty bit management
Jul 7 06:07:00.886959 kernel: CPU features: detected: Spectre-v4
Jul 7 06:07:00.886965 kernel: CPU features: detected: Spectre-BHB
Jul 7 06:07:00.886972 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 06:07:00.886979 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 06:07:00.886987 kernel: CPU features: detected: ARM erratum 1418040
Jul 7 06:07:00.886994 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 06:07:00.887000 kernel: alternatives: applying boot alternatives
Jul 7 06:07:00.887008 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 06:07:00.887015 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:07:00.887022 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:07:00.887029 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:07:00.887036 kernel: Fallback order for Node 0: 0
Jul 7 06:07:00.887043 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 7 06:07:00.887049 kernel: Policy zone: DMA
Jul 7 06:07:00.887056 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:07:00.887064 kernel: software IO TLB: area num 4.
Jul 7 06:07:00.887071 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 7 06:07:00.887078 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 7 06:07:00.887085 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 06:07:00.887092 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:07:00.887099 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:07:00.887106 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 06:07:00.887113 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:07:00.887120 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:07:00.887127 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:07:00.887134 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 06:07:00.887140 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 06:07:00.887148 kernel: GICv3: 256 SPIs implemented
Jul 7 06:07:00.887155 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 06:07:00.887162 kernel: Root IRQ handler: gic_handle_irq
Jul 7 06:07:00.887169 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 7 06:07:00.887175 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 7 06:07:00.887182 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 7 06:07:00.887189 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 06:07:00.887196 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 7 06:07:00.887203 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 7 06:07:00.887210 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 7 06:07:00.887224 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:07:00.887233 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:07:00.887240 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 7 06:07:00.887247 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 7 06:07:00.887254 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 7 06:07:00.887261 kernel: arm-pv: using stolen time PV
Jul 7 06:07:00.887268 kernel: Console: colour dummy device 80x25
Jul 7 06:07:00.887275 kernel: ACPI: Core revision 20230628
Jul 7 06:07:00.887283 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 7 06:07:00.887290 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:07:00.887297 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 06:07:00.887305 kernel: landlock: Up and running.
Jul 7 06:07:00.887312 kernel: SELinux: Initializing.
Jul 7 06:07:00.887319 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:07:00.887326 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:07:00.887333 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:07:00.887341 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:07:00.887348 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:07:00.887355 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:07:00.887362 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 7 06:07:00.887370 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 7 06:07:00.887377 kernel: Remapping and enabling EFI services.
Jul 7 06:07:00.887384 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:07:00.887391 kernel: Detected PIPT I-cache on CPU1
Jul 7 06:07:00.887398 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 7 06:07:00.887405 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 7 06:07:00.887412 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:07:00.887419 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 7 06:07:00.887426 kernel: Detected PIPT I-cache on CPU2
Jul 7 06:07:00.887433 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 7 06:07:00.887441 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 7 06:07:00.887449 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:07:00.887459 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 7 06:07:00.887468 kernel: Detected PIPT I-cache on CPU3
Jul 7 06:07:00.887475 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 7 06:07:00.887483 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 7 06:07:00.887490 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:07:00.887497 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 7 06:07:00.887505 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 06:07:00.887513 kernel: SMP: Total of 4 processors activated.
Jul 7 06:07:00.887521 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 06:07:00.887528 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 06:07:00.887535 kernel: CPU features: detected: Common not Private translations
Jul 7 06:07:00.887543 kernel: CPU features: detected: CRC32 instructions
Jul 7 06:07:00.887550 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 7 06:07:00.887557 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 06:07:00.887565 kernel: CPU features: detected: LSE atomic instructions
Jul 7 06:07:00.887573 kernel: CPU features: detected: Privileged Access Never
Jul 7 06:07:00.887580 kernel: CPU features: detected: RAS Extension Support
Jul 7 06:07:00.887588 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 7 06:07:00.887595 kernel: CPU: All CPU(s) started at EL1
Jul 7 06:07:00.887602 kernel: alternatives: applying system-wide alternatives
Jul 7 06:07:00.887610 kernel: devtmpfs: initialized
Jul 7 06:07:00.887617 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:07:00.887624 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 06:07:00.887632 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:07:00.887640 kernel: SMBIOS 3.0.0 present.
Jul 7 06:07:00.887648 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 7 06:07:00.887655 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:07:00.887662 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 06:07:00.887685 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 06:07:00.887693 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 06:07:00.887700 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:07:00.887708 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Jul 7 06:07:00.887715 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:07:00.887724 kernel: cpuidle: using governor menu
Jul 7 06:07:00.887732 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 06:07:00.887739 kernel: ASID allocator initialised with 32768 entries
Jul 7 06:07:00.887746 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:07:00.887754 kernel: Serial: AMBA PL011 UART driver
Jul 7 06:07:00.887761 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 06:07:00.887768 kernel: Modules: 0 pages in range for non-PLT usage
Jul 7 06:07:00.887776 kernel: Modules: 509008 pages in range for PLT usage
Jul 7 06:07:00.887783 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:07:00.887791 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:07:00.887799 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 06:07:00.887806 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 06:07:00.887814 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:07:00.887821 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:07:00.887829 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 06:07:00.887836 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 06:07:00.887843 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:07:00.887851 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:07:00.887859 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:07:00.887866 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:07:00.887874 kernel: ACPI: Interpreter enabled
Jul 7 06:07:00.887881 kernel: ACPI: Using GIC for interrupt routing
Jul 7 06:07:00.887888 kernel: ACPI: MCFG table detected, 1 entries
Jul 7 06:07:00.887896 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 06:07:00.887903 kernel: printk: console [ttyAMA0] enabled
Jul 7 06:07:00.887910 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:07:00.888035 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:07:00.888108 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 7 06:07:00.888173 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 7 06:07:00.888246 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 7 06:07:00.888311 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 7 06:07:00.888321 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 7 06:07:00.888328 kernel: PCI host bridge to bus 0000:00
Jul 7 06:07:00.888398 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 7 06:07:00.888459 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 7 06:07:00.888516 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 7 06:07:00.888572 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:07:00.888655 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 7 06:07:00.888759 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 7 06:07:00.888828 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 7 06:07:00.888901 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 7 06:07:00.888968 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 06:07:00.889055 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 06:07:00.889123 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 7 06:07:00.889188 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 7 06:07:00.889257 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 7 06:07:00.889316 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 7 06:07:00.889376 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 7 06:07:00.889386 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 7 06:07:00.889394 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 7 06:07:00.889401 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 7 06:07:00.889408 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 7 06:07:00.889416 kernel: iommu: Default domain type: Translated
Jul 7 06:07:00.889423 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 06:07:00.889431 kernel: efivars: Registered efivars operations
Jul 7 06:07:00.889440 kernel: vgaarb: loaded
Jul 7 06:07:00.889447 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 06:07:00.889454 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:07:00.889462 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:07:00.889469 kernel: pnp: PnP ACPI init
Jul 7 06:07:00.889539 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 7 06:07:00.889549 kernel: pnp: PnP ACPI: found 1 devices
Jul 7 06:07:00.889557 kernel: NET: Registered PF_INET protocol family
Jul 7 06:07:00.889565 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:07:00.889574 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:07:00.889582 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:07:00.889589 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:07:00.889596 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:07:00.889604 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:07:00.889611 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:07:00.889619 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:07:00.889626 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:07:00.889635 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:07:00.889642 kernel: kvm [1]: HYP mode not available
Jul 7 06:07:00.889650 kernel: Initialise system trusted keyrings
Jul 7 06:07:00.889657 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:07:00.889664 kernel: Key type asymmetric registered
Jul 7 06:07:00.889681 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:07:00.889689 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:07:00.889696 kernel: io scheduler mq-deadline registered
Jul 7 06:07:00.889704 kernel: io scheduler kyber registered
Jul 7 06:07:00.889711 kernel: io scheduler bfq registered
Jul 7 06:07:00.889721 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 7 06:07:00.889728 kernel: ACPI: button: Power Button [PWRB]
Jul 7 06:07:00.889736 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 7 06:07:00.889808 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 7 06:07:00.889819 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:07:00.889826 kernel: thunder_xcv, ver 1.0
Jul 7 06:07:00.889834 kernel: thunder_bgx, ver 1.0
Jul 7 06:07:00.889841 kernel: nicpf, ver 1.0
Jul 7 06:07:00.889848 kernel: nicvf, ver 1.0
Jul 7 06:07:00.889921 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 06:07:00.889983 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T06:07:00 UTC (1751868420)
Jul 7 06:07:00.889993 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 06:07:00.890000 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 7 06:07:00.890008 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 7 06:07:00.890015 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 06:07:00.890022 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:07:00.890030 kernel: Segment Routing with IPv6
Jul 7 06:07:00.890039 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:07:00.890046 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:07:00.890053 kernel: Key type dns_resolver registered
Jul 7 06:07:00.890060 kernel: registered taskstats version 1
Jul 7 06:07:00.890068 kernel: Loading compiled-in X.509 certificates
Jul 7 06:07:00.890076 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94'
Jul 7 06:07:00.890083 kernel: Key type .fscrypt registered
Jul 7 06:07:00.890090 kernel: Key type fscrypt-provisioning registered
Jul 7 06:07:00.890098 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:07:00.890106 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:07:00.890114 kernel: ima: No architecture policies found
Jul 7 06:07:00.890121 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 06:07:00.890129 kernel: clk: Disabling unused clocks
Jul 7 06:07:00.890136 kernel: Freeing unused kernel memory: 39424K
Jul 7 06:07:00.890143 kernel: Run /init as init process
Jul 7 06:07:00.890150 kernel: with arguments:
Jul 7 06:07:00.890157 kernel: /init
Jul 7 06:07:00.890165 kernel: with environment:
Jul 7 06:07:00.890173 kernel: HOME=/
Jul 7 06:07:00.890180 kernel: TERM=linux
Jul 7 06:07:00.890187 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:07:00.890197 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 06:07:00.890206 systemd[1]: Detected virtualization kvm.
Jul 7 06:07:00.890214 systemd[1]: Detected architecture arm64.
Jul 7 06:07:00.890229 systemd[1]: Running in initrd.
Jul 7 06:07:00.890238 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:07:00.890246 systemd[1]: Hostname set to <localhost>.
Jul 7 06:07:00.890254 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:07:00.890262 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:07:00.890270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:07:00.890278 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:07:00.890287 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:07:00.890295 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:07:00.890304 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:07:00.890312 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:07:00.890321 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:07:00.890330 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:07:00.890337 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:07:00.890345 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:07:00.890353 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:07:00.890362 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:07:00.890370 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:07:00.890378 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:07:00.890386 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:07:00.890394 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:07:00.890402 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:07:00.890409 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 06:07:00.890417 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:07:00.890425 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:07:00.890434 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:07:00.890442 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:07:00.890450 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:07:00.890458 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:07:00.890466 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:07:00.890474 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:07:00.890482 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:07:00.890489 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:07:00.890499 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:07:00.890507 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:07:00.890515 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:07:00.890523 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:07:00.890531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:07:00.890541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:07:00.890549 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:07:00.890574 systemd-journald[238]: Collecting audit messages is disabled.
Jul 7 06:07:00.890593 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:07:00.890603 systemd-journald[238]: Journal started
Jul 7 06:07:00.890621 systemd-journald[238]: Runtime Journal (/run/log/journal/41913cccabfa4e5fafbd041c9177a88b) is 5.9M, max 47.3M, 41.4M free.
Jul 7 06:07:00.892757 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:07:00.881354 systemd-modules-load[239]: Inserted module 'overlay'
Jul 7 06:07:00.894694 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:07:00.894723 kernel: Bridge firewalling registered
Jul 7 06:07:00.895062 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 7 06:07:00.897459 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:07:00.899750 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:07:00.903314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:07:00.905232 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:07:00.906197 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:07:00.908095 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:07:00.910355 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:07:00.917033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:07:00.919113 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:07:00.925351 dracut-cmdline[273]: dracut-dracut-053
Jul 7 06:07:00.930996 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 06:07:00.929789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:07:00.959613 systemd-resolved[281]: Positive Trust Anchors:
Jul 7 06:07:00.959629 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:07:00.959660 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:07:00.964333 systemd-resolved[281]: Defaulting to hostname 'linux'.
Jul 7 06:07:00.965231 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:07:00.967128 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:07:01.000694 kernel: SCSI subsystem initialized
Jul 7 06:07:01.004685 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:07:01.014681 kernel: iscsi: registered transport (tcp)
Jul 7 06:07:01.024829 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:07:01.024862 kernel: QLogic iSCSI HBA Driver
Jul 7 06:07:01.066357 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:07:01.075846 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:07:01.093282 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:07:01.093319 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:07:01.094679 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 06:07:01.139694 kernel: raid6: neonx8 gen() 15791 MB/s
Jul 7 06:07:01.156701 kernel: raid6: neonx4 gen() 15675 MB/s
Jul 7 06:07:01.173684 kernel: raid6: neonx2 gen() 13211 MB/s
Jul 7 06:07:01.190685 kernel: raid6: neonx1 gen() 10489 MB/s
Jul 7 06:07:01.207685 kernel: raid6: int64x8 gen() 6953 MB/s
Jul 7 06:07:01.224685 kernel: raid6: int64x4 gen() 7331 MB/s
Jul 7 06:07:01.241692 kernel: raid6: int64x2 gen() 6131 MB/s
Jul 7 06:07:01.258693 kernel: raid6: int64x1 gen() 5061 MB/s
Jul 7 06:07:01.258720 kernel: raid6: using algorithm neonx8 gen() 15791 MB/s
Jul 7 06:07:01.275697 kernel: raid6: .... xor() 11931 MB/s, rmw enabled
Jul 7 06:07:01.275723 kernel: raid6: using neon recovery algorithm
Jul 7 06:07:01.280685 kernel: xor: measuring software checksum speed
Jul 7 06:07:01.280700 kernel: 8regs : 19344 MB/sec
Jul 7 06:07:01.282121 kernel: 32regs : 17780 MB/sec
Jul 7 06:07:01.282148 kernel: arm64_neon : 26356 MB/sec
Jul 7 06:07:01.282166 kernel: xor: using function: arm64_neon (26356 MB/sec)
Jul 7 06:07:01.331695 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:07:01.342587 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:07:01.349810 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:07:01.361702 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 7 06:07:01.364797 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:07:01.367382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:07:01.381779 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jul 7 06:07:01.407617 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:07:01.414896 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:07:01.455851 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:07:01.463859 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:07:01.477289 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:07:01.479095 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:07:01.481470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:07:01.482345 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:07:01.492837 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:07:01.495501 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 7 06:07:01.504297 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 06:07:01.504615 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:07:01.509448 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:07:01.509558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:07:01.516946 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:07:01.516965 kernel: GPT:9289727 != 19775487
Jul 7 06:07:01.516982 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:07:01.516993 kernel: GPT:9289727 != 19775487
Jul 7 06:07:01.517002 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:07:01.517013 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:07:01.516981 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:07:01.517769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:07:01.517914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:07:01.519581 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:07:01.532142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:07:01.535490 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (517)
Jul 7 06:07:01.538240 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (522)
Jul 7 06:07:01.547432 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 06:07:01.548585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:07:01.554854 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 06:07:01.561749 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 06:07:01.562657 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 06:07:01.568277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:07:01.579795 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:07:01.581281 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:07:01.585740 disk-uuid[553]: Primary Header is updated.
Jul 7 06:07:01.585740 disk-uuid[553]: Secondary Entries is updated.
Jul 7 06:07:01.585740 disk-uuid[553]: Secondary Header is updated.
Jul 7 06:07:01.588183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:07:01.600696 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:07:01.607032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:07:02.605699 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:07:02.606363 disk-uuid[554]: The operation has completed successfully.
Jul 7 06:07:02.628371 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:07:02.628493 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:07:02.645865 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:07:02.649949 sh[577]: Success
Jul 7 06:07:02.663688 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 06:07:02.700035 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:07:02.701484 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:07:02.702243 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:07:02.711796 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 06:07:02.711830 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:07:02.711841 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 06:07:02.713122 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 06:07:02.713137 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 06:07:02.716874 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:07:02.717908 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:07:02.731800 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:07:02.733051 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:07:02.739341 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:07:02.739375 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:07:02.739386 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:07:02.742747 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:07:02.748991 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 06:07:02.750282 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:07:02.755570 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:07:02.764824 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:07:02.826401 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:07:02.835804 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:07:02.858714 ignition[666]: Ignition 2.19.0
Jul 7 06:07:02.858725 ignition[666]: Stage: fetch-offline
Jul 7 06:07:02.858761 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:07:02.859993 systemd-networkd[766]: lo: Link UP
Jul 7 06:07:02.858770 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:07:02.859996 systemd-networkd[766]: lo: Gained carrier
Jul 7 06:07:02.858920 ignition[666]: parsed url from cmdline: ""
Jul 7 06:07:02.860954 systemd-networkd[766]: Enumeration completed
Jul 7 06:07:02.858923 ignition[666]: no config URL provided
Jul 7 06:07:02.861086 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:07:02.858927 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:07:02.861546 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:07:02.858934 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:07:02.861550 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:07:02.858955 ignition[666]: op(1): [started] loading QEMU firmware config module
Jul 7 06:07:02.862765 systemd[1]: Reached target network.target - Network.
Jul 7 06:07:02.858960 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 06:07:02.862844 systemd-networkd[766]: eth0: Link UP
Jul 7 06:07:02.867143 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jul 7 06:07:02.862848 systemd-networkd[766]: eth0: Gained carrier
Jul 7 06:07:02.862854 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:07:02.881721 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:07:02.912138 ignition[666]: parsing config with SHA512: ee0483de4347d7e0379ab98a21b30ba1df92d0661b43b09d9cf32b22eb109db91a806e0ed638e418297eabc98d3b5e84015f32c69797c5564a25411e041247cd
Jul 7 06:07:02.916144 unknown[666]: fetched base config from "system"
Jul 7 06:07:02.916156 unknown[666]: fetched user config from "qemu"
Jul 7 06:07:02.917779 ignition[666]: fetch-offline: fetch-offline passed
Jul 7 06:07:02.917927 ignition[666]: Ignition finished successfully
Jul 7 06:07:02.920035 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:07:02.921058 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 06:07:02.927840 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:07:02.937988 ignition[774]: Ignition 2.19.0
Jul 7 06:07:02.937997 ignition[774]: Stage: kargs
Jul 7 06:07:02.938152 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:07:02.938161 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:07:02.939048 ignition[774]: kargs: kargs passed
Jul 7 06:07:02.939091 ignition[774]: Ignition finished successfully
Jul 7 06:07:02.941007 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:07:02.946803 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:07:02.956544 ignition[781]: Ignition 2.19.0
Jul 7 06:07:02.956554 ignition[781]: Stage: disks
Jul 7 06:07:02.956767 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:07:02.956777 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:07:02.957603 ignition[781]: disks: disks passed
Jul 7 06:07:02.957645 ignition[781]: Ignition finished successfully
Jul 7 06:07:02.960081 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:07:02.960985 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:07:02.962185 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:07:02.963636 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:07:02.965106 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:07:02.966382 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:07:02.973797 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:07:02.983724 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 06:07:02.987398 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:07:02.989753 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:07:03.038113 kernel: EXT4-fs (vda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 06:07:03.038885 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:07:03.039909 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:07:03.046765 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:07:03.048131 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:07:03.049152 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:07:03.049230 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:07:03.049279 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:07:03.056340 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (799)
Jul 7 06:07:03.056366 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:07:03.056377 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:07:03.054853 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:07:03.058863 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:07:03.059564 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:07:03.062732 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:07:03.063543 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:07:03.102375 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:07:03.105602 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:07:03.109648 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:07:03.112394 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:07:03.181461 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:07:03.191771 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:07:03.193181 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:07:03.198695 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:07:03.212945 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:07:03.218546 ignition[914]: INFO : Ignition 2.19.0
Jul 7 06:07:03.218546 ignition[914]: INFO : Stage: mount
Jul 7 06:07:03.219968 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:07:03.219968 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:07:03.219968 ignition[914]: INFO : mount: mount passed
Jul 7 06:07:03.219968 ignition[914]: INFO : Ignition finished successfully
Jul 7 06:07:03.221770 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:07:03.233787 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:07:03.711099 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:07:03.719914 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:07:03.724677 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (927)
Jul 7 06:07:03.726110 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:07:03.726124 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:07:03.726134 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:07:03.728680 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:07:03.729574 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:07:03.744673 ignition[944]: INFO : Ignition 2.19.0
Jul 7 06:07:03.744673 ignition[944]: INFO : Stage: files
Jul 7 06:07:03.745883 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:07:03.745883 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:07:03.745883 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:07:03.748424 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:07:03.748424 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:07:03.748424 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:07:03.748424 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:07:03.752291 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:07:03.752291 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 7 06:07:03.752291 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 7 06:07:03.748552 unknown[944]: wrote ssh authorized keys file for user: core
Jul 7 06:07:03.805288 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:07:04.098975 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 7 06:07:04.098975 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 7 06:07:04.101771 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 7 06:07:04.198046 systemd-networkd[766]: eth0: Gained IPv6LL
Jul 7 06:07:04.715615 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 06:07:05.051863 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 7 06:07:05.051863 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 06:07:05.054811 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:07:05.054811 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:07:05.054811 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 06:07:05.054811 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 7 06:07:05.054811 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:07:05.054811 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:07:05.054811 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 7 06:07:05.054811 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:07:05.079097 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:07:05.082506 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:07:05.084570 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:07:05.084570 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:07:05.084570 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:07:05.084570 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:07:05.084570 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:07:05.084570 ignition[944]: INFO : files: files passed
Jul 7 06:07:05.084570 ignition[944]: INFO : Ignition finished successfully
Jul 7 06:07:05.085388 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:07:05.095798 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:07:05.097257 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:07:05.098757 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:07:05.098854 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:07:05.104591 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 06:07:05.106793 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:07:05.106793 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:07:05.109413 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:07:05.110701 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:07:05.111920 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:07:05.125869 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:07:05.143232 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:07:05.143329 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:07:05.144903 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:07:05.146183 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:07:05.147589 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:07:05.148250 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:07:05.162525 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:07:05.172867 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:07:05.179941 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:07:05.180838 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:07:05.182287 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 06:07:05.183522 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 06:07:05.183629 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:07:05.185468 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 06:07:05.186897 systemd[1]: Stopped target basic.target - Basic System. Jul 7 06:07:05.188076 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 06:07:05.189317 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:07:05.190699 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 06:07:05.192235 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 06:07:05.193527 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:07:05.195032 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 06:07:05.196485 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 06:07:05.197753 systemd[1]: Stopped target swap.target - Swaps. Jul 7 06:07:05.198864 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 06:07:05.198972 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:07:05.200653 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:07:05.202053 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:07:05.203425 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 06:07:05.206723 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:07:05.207619 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 06:07:05.207740 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 06:07:05.209873 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 06:07:05.209986 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:07:05.211400 systemd[1]: Stopped target paths.target - Path Units. Jul 7 06:07:05.212489 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 06:07:05.215751 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:07:05.216699 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 06:07:05.218233 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 06:07:05.219502 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 06:07:05.219585 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:07:05.220789 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 06:07:05.220870 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:07:05.221955 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 06:07:05.222061 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:07:05.223328 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 06:07:05.223425 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 06:07:05.238813 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jul 7 06:07:05.240085 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 06:07:05.240725 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 06:07:05.240836 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:07:05.242147 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 06:07:05.242247 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:07:05.246488 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 06:07:05.246576 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 06:07:05.250875 ignition[998]: INFO : Ignition 2.19.0 Jul 7 06:07:05.250875 ignition[998]: INFO : Stage: umount Jul 7 06:07:05.252173 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:05.252173 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:07:05.252173 ignition[998]: INFO : umount: umount passed Jul 7 06:07:05.252173 ignition[998]: INFO : Ignition finished successfully Jul 7 06:07:05.252808 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 06:07:05.254970 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 06:07:05.255059 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 06:07:05.256068 systemd[1]: Stopped target network.target - Network. Jul 7 06:07:05.257157 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 06:07:05.257223 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 06:07:05.258397 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 06:07:05.258436 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 06:07:05.259725 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 06:07:05.259766 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 06:07:05.261056 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 06:07:05.261097 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 06:07:05.262660 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 06:07:05.263774 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 06:07:05.268731 systemd-networkd[766]: eth0: DHCPv6 lease lost Jul 7 06:07:05.270435 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 06:07:05.271399 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 06:07:05.272897 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 06:07:05.272928 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:07:05.282757 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 06:07:05.283402 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 06:07:05.283458 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:07:05.285085 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:07:05.288537 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 06:07:05.288631 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 06:07:05.298395 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:07:05.298483 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
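
Note the second Ignition process above (ignition[998]) entering its "umount" stage with its own config search under /usr/lib/ignition: Ignition runs as a sequence of initrd stages (fetch, kargs, disks, mount, files, umount, matching the ignition-*.service units being stopped here), and each stage reports independently. Once the system is up, a boot's provisioning trail can be replayed from the persistent journal by syslog identifier, for example:

  journalctl -b -t ignition -o short-precise
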
Jul 7 06:07:05.299384 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 06:07:05.299425 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 06:07:05.300765 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 06:07:05.300807 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:07:05.302497 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 06:07:05.302623 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:07:05.304024 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 06:07:05.304104 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 06:07:05.305979 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 06:07:05.306027 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 06:07:05.307543 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 06:07:05.307575 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:07:05.309078 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 06:07:05.309126 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:07:05.311313 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 06:07:05.311354 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 06:07:05.313499 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:07:05.313547 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:07:05.328801 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 06:07:05.329599 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 06:07:05.329647 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:07:05.331413 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 06:07:05.331458 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:07:05.333032 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 06:07:05.333071 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:07:05.334746 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:07:05.334784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:05.336613 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 06:07:05.336721 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 06:07:05.338197 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 06:07:05.338275 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 06:07:05.340408 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 06:07:05.341333 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 06:07:05.341393 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 06:07:05.343593 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 06:07:05.352527 systemd[1]: Switching root. Jul 7 06:07:05.378534 systemd-journald[238]: Journal stopped Jul 7 06:07:06.055405 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
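
The "Switching root" / "Journal stopped" pair is the handoff from the initrd to the real root filesystem: every initrd-only service above is torn down, /run/initramfs is restored for shutdown, and PID 1 re-executes itself inside /sysroot; journald receives SIGTERM so it can be restarted from the new /usr, which is why logging resumes below with a fresh journald PID. The mechanism is systemctl's switch-root verb; roughly what initrd-switch-root.service runs (exact flags vary by systemd version):

  # sketch of the initrd-to-rootfs handoff
  systemctl --no-block switch-root /sysroot
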
Jul 7 06:07:06.055459 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 06:07:06.055472 kernel: SELinux: policy capability open_perms=1 Jul 7 06:07:06.055482 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 06:07:06.055492 kernel: SELinux: policy capability always_check_network=0 Jul 7 06:07:06.055501 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 06:07:06.055516 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 06:07:06.055526 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 06:07:06.055536 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 06:07:06.055546 kernel: audit: type=1403 audit(1751868425.521:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 06:07:06.055558 systemd[1]: Successfully loaded SELinux policy in 31.551ms. Jul 7 06:07:06.055574 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.447ms. Jul 7 06:07:06.055586 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 06:07:06.055597 systemd[1]: Detected virtualization kvm. Jul 7 06:07:06.055609 systemd[1]: Detected architecture arm64. Jul 7 06:07:06.055619 systemd[1]: Detected first boot. Jul 7 06:07:06.055629 systemd[1]: Initializing machine ID from VM UUID. Jul 7 06:07:06.055640 zram_generator::config[1043]: No configuration found. Jul 7 06:07:06.055653 systemd[1]: Populated /etc with preset unit settings. Jul 7 06:07:06.055664 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 06:07:06.055688 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 06:07:06.055698 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 06:07:06.055709 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 06:07:06.055720 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 06:07:06.055731 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 06:07:06.055742 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 06:07:06.055752 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 06:07:06.055765 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 06:07:06.055776 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 06:07:06.055791 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 06:07:06.055802 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:07:06.055813 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:07:06.055824 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 06:07:06.055837 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 06:07:06.055847 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 06:07:06.055859 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
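
Two first-boot details worth noting above: the machine ID is derived from the VM's UUID rather than generated randomly, and the zram generator ran but found no configuration, so no swap-on-zram device was created. If compressed swap were wanted, the generator's documented config format looks like this (device name and sizes illustrative):

  # /etc/systemd/zram-generator.conf
  [zram0]
  zram-size = min(ram / 2, 4096)
  compression-algorithm = zstd
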
Jul 7 06:07:06.055870 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 06:07:06.055884 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:07:06.055895 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 06:07:06.055905 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 06:07:06.055915 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 06:07:06.055926 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 06:07:06.055937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:07:06.055949 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:07:06.055959 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:07:06.055970 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:07:06.055980 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 06:07:06.055991 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 06:07:06.056001 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:07:06.056012 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:07:06.056022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:07:06.056033 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 06:07:06.056045 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 06:07:06.056055 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 06:07:06.056067 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 06:07:06.056078 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 06:07:06.056089 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 06:07:06.056099 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 06:07:06.056110 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 06:07:06.056121 systemd[1]: Reached target machines.target - Containers. Jul 7 06:07:06.056131 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 06:07:06.056144 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:07:06.056154 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:07:06.056165 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 06:07:06.056175 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:07:06.056186 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:07:06.056196 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:07:06.056212 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 06:07:06.056224 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:07:06.056244 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
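
The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are all instances of a single template unit that systemd expands once per module name. Abridged from the upstream template (the file as shipped may differ slightly by version):

  # /usr/lib/systemd/system/modprobe@.service (abridged)
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target
  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=-/sbin/modprobe -abq %i
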
Jul 7 06:07:06.056256 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 06:07:06.056267 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 06:07:06.056278 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 06:07:06.056288 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 06:07:06.056299 kernel: fuse: init (API version 7.39) Jul 7 06:07:06.056310 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:07:06.056321 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:07:06.056332 kernel: loop: module loaded Jul 7 06:07:06.056343 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:07:06.056354 kernel: ACPI: bus type drm_connector registered Jul 7 06:07:06.056364 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 06:07:06.056374 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:07:06.056385 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 06:07:06.056395 systemd[1]: Stopped verity-setup.service. Jul 7 06:07:06.056405 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 06:07:06.056416 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 06:07:06.056427 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 06:07:06.056439 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 06:07:06.056449 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 06:07:06.056460 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 06:07:06.056471 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:07:06.056502 systemd-journald[1110]: Collecting audit messages is disabled. Jul 7 06:07:06.056526 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 06:07:06.056537 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 06:07:06.056548 systemd-journald[1110]: Journal started Jul 7 06:07:06.056569 systemd-journald[1110]: Runtime Journal (/run/log/journal/41913cccabfa4e5fafbd041c9177a88b) is 5.9M, max 47.3M, 41.4M free. Jul 7 06:07:05.877872 systemd[1]: Queued start job for default target multi-user.target. Jul 7 06:07:05.896428 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 06:07:05.896769 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 06:07:06.058689 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:07:06.059330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:07:06.059475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:07:06.060657 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:07:06.060826 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:07:06.061923 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 06:07:06.063215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:07:06.063359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:07:06.064612 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 06:07:06.064790 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jul 7 06:07:06.065836 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:07:06.065961 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:07:06.067222 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:07:06.068445 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:07:06.069693 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 06:07:06.081295 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:07:06.090776 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 06:07:06.092684 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 06:07:06.093552 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 06:07:06.093590 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:07:06.095323 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 06:07:06.097243 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 06:07:06.099107 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 06:07:06.100078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:07:06.101318 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 06:07:06.103081 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 06:07:06.104097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:07:06.107822 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 06:07:06.108692 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:07:06.109816 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:07:06.110048 systemd-journald[1110]: Time spent on flushing to /var/log/journal/41913cccabfa4e5fafbd041c9177a88b is 18.943ms for 854 entries. Jul 7 06:07:06.110048 systemd-journald[1110]: System Journal (/var/log/journal/41913cccabfa4e5fafbd041c9177a88b) is 8.0M, max 195.6M, 187.6M free. Jul 7 06:07:06.144852 systemd-journald[1110]: Received client request to flush runtime journal. Jul 7 06:07:06.144900 kernel: loop0: detected capacity change from 0 to 114328 Jul 7 06:07:06.113249 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 06:07:06.116838 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:07:06.119906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:07:06.123858 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 06:07:06.132800 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 06:07:06.134166 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 06:07:06.135443 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
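
The journald lines above document the flush from the volatile runtime journal in /run/log/journal (capped near 47.3M on this machine) to the persistent system journal in /var/log/journal (capped near 195.6M). Those caps are computed from filesystem size at startup; they can be pinned explicitly with a drop-in (values illustrative):

  # /etc/systemd/journald.conf.d/size.conf
  [Journal]
  Storage=persistent
  SystemMaxUse=200M
  RuntimeMaxUse=48M
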
Jul 7 06:07:06.139458 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 06:07:06.147850 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 06:07:06.151860 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 06:07:06.153687 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 06:07:06.154487 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 06:07:06.156129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:07:06.166209 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jul 7 06:07:06.166261 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jul 7 06:07:06.170315 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:07:06.172331 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 7 06:07:06.173126 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 06:07:06.174082 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 06:07:06.183878 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 06:07:06.191710 kernel: loop1: detected capacity change from 0 to 211168 Jul 7 06:07:06.205047 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 06:07:06.210886 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:07:06.215699 kernel: loop2: detected capacity change from 0 to 114432 Jul 7 06:07:06.224995 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jul 7 06:07:06.225011 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jul 7 06:07:06.229522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:07:06.249787 kernel: loop3: detected capacity change from 0 to 114328 Jul 7 06:07:06.254683 kernel: loop4: detected capacity change from 0 to 211168 Jul 7 06:07:06.260837 kernel: loop5: detected capacity change from 0 to 114432 Jul 7 06:07:06.266984 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 7 06:07:06.267373 (sd-merge)[1182]: Merged extensions into '/usr'. Jul 7 06:07:06.271211 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 06:07:06.271230 systemd[1]: Reloading... Jul 7 06:07:06.339749 zram_generator::config[1208]: No configuration found. Jul 7 06:07:06.384058 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 06:07:06.438714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:07:06.475452 systemd[1]: Reloading finished in 203 ms. Jul 7 06:07:06.506063 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 06:07:06.511715 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 06:07:06.525857 systemd[1]: Starting ensure-sysext.service... Jul 7 06:07:06.527612 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
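
"(sd-merge)" above is systemd-sysext merging the extension images staged by Ignition earlier (containerd-flatcar, docker-flatcar, kubernetes) into an overlay on /usr and /opt; this is how kubernetes-v1.33.0 becomes visible on the host without modifying the read-only base image. The merge state can be inspected or redone at runtime:

  systemd-sysext status    # show which extension images are merged
  systemd-sysext refresh   # unmerge and re-merge after changing /etc/extensions
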
Jul 7 06:07:06.540468 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Jul 7 06:07:06.540485 systemd[1]: Reloading... Jul 7 06:07:06.551136 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 06:07:06.551438 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 06:07:06.552118 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 06:07:06.552359 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jul 7 06:07:06.552412 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jul 7 06:07:06.554653 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:07:06.554679 systemd-tmpfiles[1243]: Skipping /boot Jul 7 06:07:06.561733 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:07:06.561748 systemd-tmpfiles[1243]: Skipping /boot Jul 7 06:07:06.591692 zram_generator::config[1270]: No configuration found. Jul 7 06:07:06.672135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:07:06.708342 systemd[1]: Reloading finished in 167 ms. Jul 7 06:07:06.726308 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 06:07:06.740196 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:07:06.747153 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:07:06.750015 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 06:07:06.752339 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 06:07:06.756977 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:07:06.761999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:07:06.768580 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 06:07:06.774989 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:07:06.777822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:07:06.782917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:07:06.786916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:07:06.787878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:07:06.791911 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 06:07:06.795907 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 06:07:06.797185 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:07:06.797331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:07:06.797727 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Jul 7 06:07:06.800210 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
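
The "Duplicate line for path" notices above mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the entry it read first and ignores the rest, so these are benign. For reference, a tmpfiles.d line is Type/Path/Mode/User/Group/Age, as in this hypothetical fragment:

  # /etc/tmpfiles.d/example.conf (hypothetical)
  d /var/log/journal 2755 root systemd-journal - -
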
Jul 7 06:07:06.800352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:07:06.801951 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:07:06.802093 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:07:06.808831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:07:06.809064 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:07:06.821331 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:07:06.822598 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:07:06.824238 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:07:06.826930 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 06:07:06.828406 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:07:06.839078 augenrules[1335]: No rules Jul 7 06:07:06.841165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:07:06.848949 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:07:06.852450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:07:06.856869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:07:06.861568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:07:06.862429 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:07:06.866187 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:07:06.868756 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:07:06.869069 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:07:06.870323 systemd[1]: Finished ensure-sysext.service. Jul 7 06:07:06.871217 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:07:06.872633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:07:06.872778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:07:06.874855 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:07:06.875012 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:07:06.876685 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:07:06.876829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:07:06.882353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:07:06.884167 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:07:06.888711 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1342) Jul 7 06:07:06.891981 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
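
augenrules reporting "No rules" above means it found nothing to compile: it assembles /etc/audit/rules.d/*.rules into the kernel audit ruleset, and that directory ships empty on a stock image. A hypothetical fragment, if file-watch auditing were wanted, activated with `augenrules --load`:

  # /etc/audit/rules.d/10-sshd.rules (hypothetical)
  -w /etc/ssh/sshd_config -p wa -k sshd_config
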
Jul 7 06:07:06.896097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:07:06.896150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:07:06.900124 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 06:07:06.936979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 06:07:06.946862 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 06:07:06.954758 systemd-resolved[1310]: Positive Trust Anchors: Jul 7 06:07:06.954776 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:07:06.954809 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:07:06.968862 systemd-networkd[1373]: lo: Link UP Jul 7 06:07:06.968869 systemd-networkd[1373]: lo: Gained carrier Jul 7 06:07:06.969624 systemd-networkd[1373]: Enumeration completed Jul 7 06:07:06.969760 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:07:06.970786 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:07:06.970797 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:07:06.972751 systemd-resolved[1310]: Defaulting to hostname 'linux'. Jul 7 06:07:06.975044 systemd-networkd[1373]: eth0: Link UP Jul 7 06:07:06.975053 systemd-networkd[1373]: eth0: Gained carrier Jul 7 06:07:06.975068 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:07:06.983918 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 06:07:06.985379 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:07:06.986459 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 06:07:06.987737 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:07:06.988753 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 06:07:06.991864 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Jul 7 06:07:06.992919 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 06:07:06.992965 systemd-timesyncd[1382]: Initial clock synchronization to Mon 2025-07-07 06:07:06.981146 UTC. Jul 7 06:07:06.994263 systemd[1]: Reached target network.target - Network. Jul 7 06:07:06.995710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:07:06.996656 systemd[1]: Reached target time-set.target - System Time Set. 
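
eth0 is picked up by the catch-all /usr/lib/systemd/network/zz-default.network named in the log and configured over DHCP (10.0.0.102/16 from gateway 10.0.0.1), which is also where systemd-timesyncd learns its time server (10.0.0.1:123). The effective policy is roughly this .network shape (an abridged sketch, not the shipped file verbatim):

  # zz-default.network (abridged sketch)
  [Match]
  Name=*
  [Network]
  DHCP=yes

The systemd-resolved "Positive Trust Anchors" block above is simply the built-in DNSSEC root trust anchor (the ". IN DS 20326 ..." record) plus the standard negative trust anchors for private and reverse-lookup zones.
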
Jul 7 06:07:07.010413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:07.021632 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 06:07:07.036871 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 06:07:07.059136 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 06:07:07.064625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:07.105173 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 06:07:07.106737 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:07:07.107840 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:07:07.109006 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 06:07:07.110225 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 06:07:07.111647 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:07:07.112944 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:07:07.114177 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:07:07.115383 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:07:07.115421 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:07:07.116352 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:07:07.117805 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:07:07.120238 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:07:07.129726 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:07:07.131958 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 06:07:07.133731 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:07:07.134898 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:07:07.135891 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:07:07.136862 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:07:07.136899 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:07:07.137925 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 06:07:07.140070 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:07:07.141893 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 06:07:07.142801 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:07:07.149774 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:07:07.150855 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:07:07.152650 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:07:07.159321 jq[1410]: false Jul 7 06:07:07.165859 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
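
The lvm2 "Failed to connect to lvmetad. Falling back to device scanning." warnings above are expected on an image that does not run the lvmetad daemon; activation still works via direct device scanning. On LVM generations that still know the daemon, the warning can be silenced in lvm.conf (an assumption about this image's LVM version; newer releases dropped lvmetad entirely):

  # /etc/lvm/lvm.conf
  global {
      use_lvmetad = 0
  }
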
Jul 7 06:07:07.165398 dbus-daemon[1409]: [system] SELinux support is enabled Jul 7 06:07:07.168978 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:07:07.170208 extend-filesystems[1411]: Found loop3 Jul 7 06:07:07.170208 extend-filesystems[1411]: Found loop4 Jul 7 06:07:07.170208 extend-filesystems[1411]: Found loop5 Jul 7 06:07:07.170208 extend-filesystems[1411]: Found vda Jul 7 06:07:07.170208 extend-filesystems[1411]: Found vda1 Jul 7 06:07:07.170208 extend-filesystems[1411]: Found vda2 Jul 7 06:07:07.170208 extend-filesystems[1411]: Found vda3 Jul 7 06:07:07.170208 extend-filesystems[1411]: Found usr Jul 7 06:07:07.170208 extend-filesystems[1411]: Found vda4 Jul 7 06:07:07.170208 extend-filesystems[1411]: Found vda6 Jul 7 06:07:07.170208 extend-filesystems[1411]: Found vda7 Jul 7 06:07:07.170208 extend-filesystems[1411]: Found vda9 Jul 7 06:07:07.170208 extend-filesystems[1411]: Checking size of /dev/vda9 Jul 7 06:07:07.173135 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 06:07:07.179355 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:07:07.183120 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 06:07:07.183590 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:07:07.184713 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 06:07:07.190125 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:07:07.192707 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:07:07.193010 extend-filesystems[1411]: Resized partition /dev/vda9 Jul 7 06:07:07.196562 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 06:07:07.199870 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:07:07.201707 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:07:07.202131 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 06:07:07.202732 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 06:07:07.206062 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 06:07:07.206259 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 06:07:07.217744 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1340) Jul 7 06:07:07.219133 extend-filesystems[1432]: resize2fs 1.47.1 (20-May-2024) Jul 7 06:07:07.220044 jq[1430]: true Jul 7 06:07:07.221461 (ntainerd)[1436]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:07:07.225689 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 06:07:07.237078 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:07:07.237119 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
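
extend-filesystems enumerated the disk layout above and then grows the root filesystem: the resize2fs 1.47.1 run recorded just below resizes the mounted ext4 on /dev/vda9 online, the manual equivalent being simply:

  # grow a mounted ext4 to fill its (already enlarged) partition
  resize2fs /dev/vda9
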
Jul 7 06:07:07.238689 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:07:07.238719 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 06:07:07.244824 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 06:07:07.259593 jq[1445]: true Jul 7 06:07:07.272478 extend-filesystems[1432]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 06:07:07.272478 extend-filesystems[1432]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 06:07:07.272478 extend-filesystems[1432]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 7 06:07:07.285462 extend-filesystems[1411]: Resized filesystem in /dev/vda9 Jul 7 06:07:07.274385 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 06:07:07.274387 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:07:07.275900 systemd-logind[1424]: New seat seat0. Jul 7 06:07:07.277260 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 06:07:07.285307 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 06:07:07.289797 tar[1434]: linux-arm64/LICENSE Jul 7 06:07:07.289797 tar[1434]: linux-arm64/helm Jul 7 06:07:07.304725 update_engine[1428]: I20250707 06:07:07.304132 1428 main.cc:92] Flatcar Update Engine starting Jul 7 06:07:07.307646 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:07:07.307829 update_engine[1428]: I20250707 06:07:07.307790 1428 update_check_scheduler.cc:74] Next update check in 4m15s Jul 7 06:07:07.321518 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 06:07:07.342325 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:07:07.344306 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:07:07.352643 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 06:07:07.377007 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:07:07.471742 containerd[1436]: time="2025-07-07T06:07:07.469282935Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 06:07:07.499276 containerd[1436]: time="2025-07-07T06:07:07.499179021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:07.501703 containerd[1436]: time="2025-07-07T06:07:07.501526943Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:07:07.501703 containerd[1436]: time="2025-07-07T06:07:07.501655050Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 06:07:07.501703 containerd[1436]: time="2025-07-07T06:07:07.501687946Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 06:07:07.501866 containerd[1436]: time="2025-07-07T06:07:07.501844311Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 7 06:07:07.501891 containerd[1436]: time="2025-07-07T06:07:07.501869733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:07.501945 containerd[1436]: time="2025-07-07T06:07:07.501927650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:07:07.501967 containerd[1436]: time="2025-07-07T06:07:07.501944878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:07.502148 containerd[1436]: time="2025-07-07T06:07:07.502116512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:07:07.502148 containerd[1436]: time="2025-07-07T06:07:07.502139495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:07.502195 containerd[1436]: time="2025-07-07T06:07:07.502154205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:07:07.502195 containerd[1436]: time="2025-07-07T06:07:07.502164757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:07.502259 containerd[1436]: time="2025-07-07T06:07:07.502243979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:07.502465 containerd[1436]: time="2025-07-07T06:07:07.502439276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:07.502563 containerd[1436]: time="2025-07-07T06:07:07.502545039Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:07:07.502585 containerd[1436]: time="2025-07-07T06:07:07.502563985Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 06:07:07.502651 containerd[1436]: time="2025-07-07T06:07:07.502637091Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 06:07:07.502743 containerd[1436]: time="2025-07-07T06:07:07.502726626Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:07:07.506212 containerd[1436]: time="2025-07-07T06:07:07.506168747Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 06:07:07.506278 containerd[1436]: time="2025-07-07T06:07:07.506229343Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 06:07:07.506278 containerd[1436]: time="2025-07-07T06:07:07.506245971Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 06:07:07.506278 containerd[1436]: time="2025-07-07T06:07:07.506260240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jul 7 06:07:07.506278 containerd[1436]: time="2025-07-07T06:07:07.506273990Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 06:07:07.506436 containerd[1436]: time="2025-07-07T06:07:07.506407373Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 06:07:07.506775 containerd[1436]: time="2025-07-07T06:07:07.506751361Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 06:07:07.506906 containerd[1436]: time="2025-07-07T06:07:07.506884384Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 06:07:07.506932 containerd[1436]: time="2025-07-07T06:07:07.506909765Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 06:07:07.506932 containerd[1436]: time="2025-07-07T06:07:07.506924035Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 06:07:07.506966 containerd[1436]: time="2025-07-07T06:07:07.506937705Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 06:07:07.506966 containerd[1436]: time="2025-07-07T06:07:07.506950375Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 06:07:07.506966 containerd[1436]: time="2025-07-07T06:07:07.506962087Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 06:07:07.507029 containerd[1436]: time="2025-07-07T06:07:07.506976756Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 06:07:07.507029 containerd[1436]: time="2025-07-07T06:07:07.506991106Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 06:07:07.507029 containerd[1436]: time="2025-07-07T06:07:07.507003217Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 06:07:07.507029 containerd[1436]: time="2025-07-07T06:07:07.507015008Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 06:07:07.507029 containerd[1436]: time="2025-07-07T06:07:07.507026999Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 06:07:07.507115 containerd[1436]: time="2025-07-07T06:07:07.507046665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507115 containerd[1436]: time="2025-07-07T06:07:07.507060455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507115 containerd[1436]: time="2025-07-07T06:07:07.507072646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507115 containerd[1436]: time="2025-07-07T06:07:07.507091392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507115 containerd[1436]: time="2025-07-07T06:07:07.507103943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 7 06:07:07.507204 containerd[1436]: time="2025-07-07T06:07:07.507117333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507204 containerd[1436]: time="2025-07-07T06:07:07.507129724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507204 containerd[1436]: time="2025-07-07T06:07:07.507142315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507204 containerd[1436]: time="2025-07-07T06:07:07.507155265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507204 containerd[1436]: time="2025-07-07T06:07:07.507169335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507204 containerd[1436]: time="2025-07-07T06:07:07.507181606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507204 containerd[1436]: time="2025-07-07T06:07:07.507193557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507321 containerd[1436]: time="2025-07-07T06:07:07.507209945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507321 containerd[1436]: time="2025-07-07T06:07:07.507225174Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 06:07:07.507321 containerd[1436]: time="2025-07-07T06:07:07.507245559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507321 containerd[1436]: time="2025-07-07T06:07:07.507257351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507321 containerd[1436]: time="2025-07-07T06:07:07.507267663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 06:07:07.507415 containerd[1436]: time="2025-07-07T06:07:07.507382859Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 06:07:07.507415 containerd[1436]: time="2025-07-07T06:07:07.507399647Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 06:07:07.507415 containerd[1436]: time="2025-07-07T06:07:07.507410039Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 06:07:07.507468 containerd[1436]: time="2025-07-07T06:07:07.507422710Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 06:07:07.507468 containerd[1436]: time="2025-07-07T06:07:07.507432143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507468 containerd[1436]: time="2025-07-07T06:07:07.507444334Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 06:07:07.507468 containerd[1436]: time="2025-07-07T06:07:07.507453487Z" level=info msg="NRI interface is disabled by configuration." 
Jul 7 06:07:07.507468 containerd[1436]: time="2025-07-07T06:07:07.507464479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 7 06:07:07.507900 containerd[1436]: time="2025-07-07T06:07:07.507815582Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 06:07:07.507900 containerd[1436]: time="2025-07-07T06:07:07.507895324Z" level=info msg="Connect containerd service" Jul 7 06:07:07.508033 containerd[1436]: time="2025-07-07T06:07:07.507924103Z" level=info msg="using legacy CRI server" Jul 7 06:07:07.508033 containerd[1436]: time="2025-07-07T06:07:07.507931298Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:07:07.508033 containerd[1436]: time="2025-07-07T06:07:07.508007522Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 06:07:07.508613 containerd[1436]: time="2025-07-07T06:07:07.508585699Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:07:07.509399 containerd[1436]: time="2025-07-07T06:07:07.508769884Z" level=info msg="Start subscribing containerd event" Jul 7 06:07:07.509399 containerd[1436]: time="2025-07-07T06:07:07.508822166Z" level=info msg="Start recovering state" Jul 7 06:07:07.509399 containerd[1436]: time="2025-07-07T06:07:07.508892514Z" level=info msg="Start event monitor" Jul 7 06:07:07.509399 containerd[1436]: time="2025-07-07T06:07:07.508903626Z" level=info msg="Start snapshots syncer" Jul 7 06:07:07.509399 containerd[1436]: time="2025-07-07T06:07:07.508913419Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:07:07.509399 containerd[1436]: time="2025-07-07T06:07:07.508921173Z" level=info msg="Start streaming server" Jul 7 06:07:07.509399 containerd[1436]: time="2025-07-07T06:07:07.509273316Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:07:07.509399 containerd[1436]: time="2025-07-07T06:07:07.509326917Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:07:07.509399 containerd[1436]: time="2025-07-07T06:07:07.509377520Z" level=info msg="containerd successfully booted in 0.041415s" Jul 7 06:07:07.509463 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:07:07.692850 tar[1434]: linux-arm64/README.md Jul 7 06:07:07.706254 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:07:08.293912 systemd-networkd[1373]: eth0: Gained IPv6LL Jul 7 06:07:08.297287 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:07:08.300909 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:07:08.308952 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 06:07:08.311214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:07:08.313110 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:07:08.340969 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:07:08.342329 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 06:07:08.342482 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 06:07:08.345131 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:07:08.666248 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:07:08.685352 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:07:08.696060 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:07:08.700547 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:07:08.700749 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:07:08.703516 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:07:08.717400 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:07:08.719949 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:07:08.721770 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 06:07:08.722845 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:07:08.858714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:07:08.859928 systemd[1]: Reached target multi-user.target - Multi-User System. 
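The "no network config found in /etc/cni/net.d" error above is the expected first-boot state: the CRI plugin keeps retrying until a network add-on installs a CNI config. A minimal sketch that would satisfy the loader, assuming the standard bridge/host-local/portmap plugins exist under /opt/cni/bin (the NetworkPluginBinDir from the config dump); the file name and subnet below are made up for illustration:

# Hypothetical single-node bridge network; name and 10.88.0.0/16 subnet are assumptions.
cat <<'EOF' > /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF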
Jul 7 06:07:08.860794 systemd[1]: Startup finished in 532ms (kernel) + 4.822s (initrd) + 3.371s (userspace) = 8.726s. Jul 7 06:07:08.862396 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:07:09.275940 kubelet[1521]: E0707 06:07:09.275869 1521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:07:09.278383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:07:09.278527 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:07:12.995180 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:07:12.996320 systemd[1]: Started sshd@0-10.0.0.102:22-10.0.0.1:40256.service - OpenSSH per-connection server daemon (10.0.0.1:40256). Jul 7 06:07:13.063440 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 40256 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:07:13.067171 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:13.085752 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:07:13.096946 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:07:13.098747 systemd-logind[1424]: New session 1 of user core. Jul 7 06:07:13.110393 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:07:13.122026 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:07:13.124520 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:07:13.204085 systemd[1539]: Queued start job for default target default.target. Jul 7 06:07:13.213625 systemd[1539]: Created slice app.slice - User Application Slice. Jul 7 06:07:13.213658 systemd[1539]: Reached target paths.target - Paths. Jul 7 06:07:13.213689 systemd[1539]: Reached target timers.target - Timers. Jul 7 06:07:13.214946 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:07:13.224376 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:07:13.224439 systemd[1539]: Reached target sockets.target - Sockets. Jul 7 06:07:13.224451 systemd[1539]: Reached target basic.target - Basic System. Jul 7 06:07:13.224489 systemd[1539]: Reached target default.target - Main User Target. Jul 7 06:07:13.224515 systemd[1539]: Startup finished in 95ms. Jul 7 06:07:13.224856 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:07:13.226186 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:07:13.285959 systemd[1]: Started sshd@1-10.0.0.102:22-10.0.0.1:40262.service - OpenSSH per-connection server daemon (10.0.0.1:40262). Jul 7 06:07:13.323880 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 40262 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:07:13.325104 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:13.329843 systemd-logind[1424]: New session 2 of user core. Jul 7 06:07:13.340536 systemd[1]: Started session-2.scope - Session 2 of User core. 
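The kubelet exit above is the normal pre-bootstrap failure: /var/lib/kubelet/config.yaml is only written later by kubeadm init/join, so the unit crash-loops until then. A quick way to confirm that this is the cause (a sketch; none of these commands appear in the log itself):

# The unit keeps failing with status=1/FAILURE until a provisioner writes the config.
test -f /var/lib/kubelet/config.yaml || echo "kubelet not bootstrapped yet"
journalctl -u kubelet -n 5 --no-pager   # shows the same config.yaml error as above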
Jul 7 06:07:13.395037 sshd[1550]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:13.407914 systemd[1]: sshd@1-10.0.0.102:22-10.0.0.1:40262.service: Deactivated successfully. Jul 7 06:07:13.409172 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:07:13.412110 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:07:13.413099 systemd[1]: Started sshd@2-10.0.0.102:22-10.0.0.1:40274.service - OpenSSH per-connection server daemon (10.0.0.1:40274). Jul 7 06:07:13.416086 systemd-logind[1424]: Removed session 2. Jul 7 06:07:13.446235 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 40274 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:07:13.448290 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:13.453845 systemd-logind[1424]: New session 3 of user core. Jul 7 06:07:13.463870 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:07:13.512112 sshd[1557]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:13.531014 systemd[1]: sshd@2-10.0.0.102:22-10.0.0.1:40274.service: Deactivated successfully. Jul 7 06:07:13.532471 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:07:13.533856 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. Jul 7 06:07:13.547921 systemd[1]: Started sshd@3-10.0.0.102:22-10.0.0.1:40286.service - OpenSSH per-connection server daemon (10.0.0.1:40286). Jul 7 06:07:13.548946 systemd-logind[1424]: Removed session 3. Jul 7 06:07:13.577201 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 40286 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:07:13.578320 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:13.582100 systemd-logind[1424]: New session 4 of user core. Jul 7 06:07:13.594797 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:07:13.645908 sshd[1564]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:13.653989 systemd[1]: sshd@3-10.0.0.102:22-10.0.0.1:40286.service: Deactivated successfully. Jul 7 06:07:13.655317 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:07:13.657743 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:07:13.658780 systemd[1]: Started sshd@4-10.0.0.102:22-10.0.0.1:40292.service - OpenSSH per-connection server daemon (10.0.0.1:40292). Jul 7 06:07:13.659564 systemd-logind[1424]: Removed session 4. Jul 7 06:07:13.690049 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 40292 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:07:13.691245 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:13.695205 systemd-logind[1424]: New session 5 of user core. Jul 7 06:07:13.708795 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:07:13.764586 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:07:13.764912 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:07:13.779450 sudo[1574]: pam_unix(sudo:session): session closed for user root Jul 7 06:07:13.780989 sshd[1571]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:13.790946 systemd[1]: sshd@4-10.0.0.102:22-10.0.0.1:40292.service: Deactivated successfully. Jul 7 06:07:13.792303 systemd[1]: session-5.scope: Deactivated successfully. 
Jul 7 06:07:13.794813 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:07:13.795961 systemd[1]: Started sshd@5-10.0.0.102:22-10.0.0.1:40302.service - OpenSSH per-connection server daemon (10.0.0.1:40302). Jul 7 06:07:13.796695 systemd-logind[1424]: Removed session 5. Jul 7 06:07:13.827755 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 40302 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:07:13.828919 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:13.832750 systemd-logind[1424]: New session 6 of user core. Jul 7 06:07:13.842797 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:07:13.892133 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:07:13.892659 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:07:13.895502 sudo[1583]: pam_unix(sudo:session): session closed for user root Jul 7 06:07:13.899860 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 06:07:13.900139 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:07:13.916877 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 06:07:13.917952 auditctl[1586]: No rules Jul 7 06:07:13.918244 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:07:13.919694 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 06:07:13.921660 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:07:13.943777 augenrules[1604]: No rules Jul 7 06:07:13.945019 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:07:13.945898 sudo[1582]: pam_unix(sudo:session): session closed for user root Jul 7 06:07:13.947375 sshd[1579]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:13.959932 systemd[1]: sshd@5-10.0.0.102:22-10.0.0.1:40302.service: Deactivated successfully. Jul 7 06:07:13.961325 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:07:13.962846 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:07:13.973905 systemd[1]: Started sshd@6-10.0.0.102:22-10.0.0.1:40314.service - OpenSSH per-connection server daemon (10.0.0.1:40314). Jul 7 06:07:13.974955 systemd-logind[1424]: Removed session 6. Jul 7 06:07:14.002637 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 40314 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:07:14.003733 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:14.007139 systemd-logind[1424]: New session 7 of user core. Jul 7 06:07:14.018792 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:07:14.068853 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:07:14.069132 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:07:14.371885 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 7 06:07:14.372110 (dockerd)[1634]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:07:14.630841 dockerd[1634]: time="2025-07-07T06:07:14.630614439Z" level=info msg="Starting up" Jul 7 06:07:14.780040 dockerd[1634]: time="2025-07-07T06:07:14.779795823Z" level=info msg="Loading containers: start." Jul 7 06:07:14.865690 kernel: Initializing XFRM netlink socket Jul 7 06:07:14.926753 systemd-networkd[1373]: docker0: Link UP Jul 7 06:07:14.943910 dockerd[1634]: time="2025-07-07T06:07:14.943865276Z" level=info msg="Loading containers: done." Jul 7 06:07:14.958461 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck123491463-merged.mount: Deactivated successfully. Jul 7 06:07:14.959459 dockerd[1634]: time="2025-07-07T06:07:14.959422233Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:07:14.959548 dockerd[1634]: time="2025-07-07T06:07:14.959533661Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 06:07:14.959652 dockerd[1634]: time="2025-07-07T06:07:14.959632415Z" level=info msg="Daemon has completed initialization" Jul 7 06:07:14.989010 dockerd[1634]: time="2025-07-07T06:07:14.988884916Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:07:14.989344 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:07:15.519814 containerd[1436]: time="2025-07-07T06:07:15.519773337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 7 06:07:16.167987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382065837.mount: Deactivated successfully. 
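With dockerd reporting "API listen on /run/docker.sock", the daemon can be smoke-tested directly over the Unix socket; a hedged sketch (curl availability is an assumption, it is not shown in this log):

# Docker's /_ping endpoint returns "OK" when the daemon is healthy.
curl --silent --unix-socket /run/docker.sock http://localhost/_ping; echo
docker info --format '{{.ServerVersion}} {{.Driver}}'   # e.g. "26.1.0 overlay2"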
Jul 7 06:07:17.207727 containerd[1436]: time="2025-07-07T06:07:17.207638963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:17.208857 containerd[1436]: time="2025-07-07T06:07:17.208824469Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 7 06:07:17.210732 containerd[1436]: time="2025-07-07T06:07:17.209529718Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:17.212234 containerd[1436]: time="2025-07-07T06:07:17.212205852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:17.213649 containerd[1436]: time="2025-07-07T06:07:17.213403752Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.693582436s" Jul 7 06:07:17.213649 containerd[1436]: time="2025-07-07T06:07:17.213448975Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 7 06:07:17.216639 containerd[1436]: time="2025-07-07T06:07:17.216593768Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 7 06:07:18.439581 containerd[1436]: time="2025-07-07T06:07:18.439522474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:18.441566 containerd[1436]: time="2025-07-07T06:07:18.441514158Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 7 06:07:18.442686 containerd[1436]: time="2025-07-07T06:07:18.442636674Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:18.445879 containerd[1436]: time="2025-07-07T06:07:18.445831965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:18.447201 containerd[1436]: time="2025-07-07T06:07:18.446919654Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.230173864s" Jul 7 06:07:18.447201 containerd[1436]: time="2025-07-07T06:07:18.446951723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 7 06:07:18.447711 containerd[1436]: 
time="2025-07-07T06:07:18.447587854Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 7 06:07:19.528506 containerd[1436]: time="2025-07-07T06:07:19.528453069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:19.528847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:07:19.530208 containerd[1436]: time="2025-07-07T06:07:19.530168851Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 7 06:07:19.531605 containerd[1436]: time="2025-07-07T06:07:19.531160956Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:19.537304 containerd[1436]: time="2025-07-07T06:07:19.537250224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:19.537880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:07:19.538521 containerd[1436]: time="2025-07-07T06:07:19.538388240Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.090764078s" Jul 7 06:07:19.538521 containerd[1436]: time="2025-07-07T06:07:19.538423268Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 7 06:07:19.539062 containerd[1436]: time="2025-07-07T06:07:19.539000194Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 7 06:07:19.644858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:07:19.651315 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:07:19.717162 kubelet[1852]: E0707 06:07:19.717094 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:07:19.721609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:07:19.721777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:07:20.639110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687501190.mount: Deactivated successfully. 
Jul 7 06:07:21.026976 containerd[1436]: time="2025-07-07T06:07:21.026848043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:21.028204 containerd[1436]: time="2025-07-07T06:07:21.028030933Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 7 06:07:21.028933 containerd[1436]: time="2025-07-07T06:07:21.028890278Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:21.033005 containerd[1436]: time="2025-07-07T06:07:21.032973748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:21.033754 containerd[1436]: time="2025-07-07T06:07:21.033601002Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.494563501s" Jul 7 06:07:21.033754 containerd[1436]: time="2025-07-07T06:07:21.033639911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 7 06:07:21.034138 containerd[1436]: time="2025-07-07T06:07:21.034119409Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 7 06:07:21.624702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149829526.mount: Deactivated successfully. 
Jul 7 06:07:22.460512 containerd[1436]: time="2025-07-07T06:07:22.460465283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:22.461740 containerd[1436]: time="2025-07-07T06:07:22.461418978Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 7 06:07:22.464396 containerd[1436]: time="2025-07-07T06:07:22.462863697Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:22.465830 containerd[1436]: time="2025-07-07T06:07:22.465793483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:22.467701 containerd[1436]: time="2025-07-07T06:07:22.467656565Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.433509565s" Jul 7 06:07:22.467821 containerd[1436]: time="2025-07-07T06:07:22.467802245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 7 06:07:22.468377 containerd[1436]: time="2025-07-07T06:07:22.468338616Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:07:22.929873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082719013.mount: Deactivated successfully. 
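Note the mismatch: the CRI config dump earlier advertises SandboxImage:registry.k8s.io/pause:3.8, while kubeadm pulls pause:3.10 here. That is benign but leaves two pause images on disk; a hedged sketch for aligning them, assuming an /etc/containerd/config.toml containing the stock sandbox_image line (not shown in this log):

# Point the CRI plugin at the same pause image kubeadm pulls, then restart containerd.
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.8"#sandbox_image = "registry.k8s.io/pause:3.10"#' \
    /etc/containerd/config.toml
systemctl restart containerd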
Jul 7 06:07:22.935084 containerd[1436]: time="2025-07-07T06:07:22.935034143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:22.936366 containerd[1436]: time="2025-07-07T06:07:22.936314467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 7 06:07:22.937251 containerd[1436]: time="2025-07-07T06:07:22.937198702Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:22.939397 containerd[1436]: time="2025-07-07T06:07:22.939363021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:22.941097 containerd[1436]: time="2025-07-07T06:07:22.940964136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 472.468443ms" Jul 7 06:07:22.941097 containerd[1436]: time="2025-07-07T06:07:22.940999126Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 06:07:22.941456 containerd[1436]: time="2025-07-07T06:07:22.941415371Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 06:07:23.389148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447439031.mount: Deactivated successfully. Jul 7 06:07:25.381403 containerd[1436]: time="2025-07-07T06:07:25.381334640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:25.382527 containerd[1436]: time="2025-07-07T06:07:25.382491056Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 7 06:07:25.383717 containerd[1436]: time="2025-07-07T06:07:25.383682343Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:25.387115 containerd[1436]: time="2025-07-07T06:07:25.387052812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:25.390539 containerd[1436]: time="2025-07-07T06:07:25.389747915Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.448295673s" Jul 7 06:07:25.390539 containerd[1436]: time="2025-07-07T06:07:25.389794424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 7 06:07:29.448117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:07:29.467957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:07:29.490181 systemd[1]: Reloading requested from client PID 2011 ('systemctl') (unit session-7.scope)... Jul 7 06:07:29.490199 systemd[1]: Reloading... Jul 7 06:07:29.575716 zram_generator::config[2050]: No configuration found. Jul 7 06:07:29.704495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:07:29.759902 systemd[1]: Reloading finished in 269 ms. Jul 7 06:07:29.800457 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:07:29.800520 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:07:29.800831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:07:29.804993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:07:29.908582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:07:29.913981 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:07:29.949777 kubelet[2096]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:07:29.949777 kubelet[2096]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:07:29.949777 kubelet[2096]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
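The deprecation warnings above say these flags should move into the kubelet config file; as a sketch, their KubeletConfiguration equivalents (kubelet v1.27+), using the paths that appear in this log, would be:

# Config-file form of the deprecated --container-runtime-endpoint and --volume-plugin-dir flags.
cat <<'EOF' >> /var/lib/kubelet/config.yaml
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF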
Jul 7 06:07:29.950114 kubelet[2096]: I0707 06:07:29.949824 2096 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:07:31.374937 kubelet[2096]: I0707 06:07:31.374895 2096 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:07:31.374937 kubelet[2096]: I0707 06:07:31.374925 2096 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:07:31.375341 kubelet[2096]: I0707 06:07:31.375130 2096 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:07:31.411033 kubelet[2096]: E0707 06:07:31.411003 2096 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 06:07:31.415798 kubelet[2096]: I0707 06:07:31.415760 2096 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:07:31.422711 kubelet[2096]: E0707 06:07:31.422659 2096 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:07:31.422711 kubelet[2096]: I0707 06:07:31.422707 2096 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:07:31.425106 kubelet[2096]: I0707 06:07:31.425079 2096 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:07:31.425402 kubelet[2096]: I0707 06:07:31.425371 2096 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:07:31.425535 kubelet[2096]: I0707 06:07:31.425393 2096 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:07:31.425612 kubelet[2096]: I0707 06:07:31.425596 2096 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:07:31.425612 kubelet[2096]: I0707 06:07:31.425605 2096 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:07:31.427611 kubelet[2096]: I0707 06:07:31.427589 2096 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:07:31.430045 kubelet[2096]: I0707 06:07:31.430011 2096 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:07:31.430098 kubelet[2096]: I0707 06:07:31.430050 2096 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:07:31.430098 kubelet[2096]: I0707 06:07:31.430090 2096 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:07:31.430159 kubelet[2096]: I0707 06:07:31.430105 2096 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:07:31.431794 kubelet[2096]: I0707 06:07:31.431768 2096 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:07:31.432431 kubelet[2096]: I0707 06:07:31.432404 2096 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:07:31.432541 kubelet[2096]: W0707 06:07:31.432522 2096 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
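The HardEvictionThresholds in the container-manager dump above are the kubelet defaults; in KubeletConfiguration form the same values would read (a sketch, values copied from the dump):

# evictionHard equivalent of the dumped defaults (0.1 = 10%, 0.05 = 5%, 0.15 = 15%).
cat <<'EOF' >> /var/lib/kubelet/config.yaml
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
EOF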
Jul 7 06:07:31.432685 kubelet[2096]: E0707 06:07:31.432648 2096 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 06:07:31.435052 kubelet[2096]: E0707 06:07:31.435009 2096 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 06:07:31.435242 kubelet[2096]: I0707 06:07:31.435213 2096 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:07:31.435276 kubelet[2096]: I0707 06:07:31.435254 2096 server.go:1289] "Started kubelet" Jul 7 06:07:31.435377 kubelet[2096]: I0707 06:07:31.435350 2096 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:07:31.439614 kubelet[2096]: I0707 06:07:31.439574 2096 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:07:31.442636 kubelet[2096]: I0707 06:07:31.442298 2096 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:07:31.442636 kubelet[2096]: I0707 06:07:31.442522 2096 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:07:31.442932 kubelet[2096]: I0707 06:07:31.442899 2096 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:07:31.443157 kubelet[2096]: I0707 06:07:31.443134 2096 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:07:31.443263 kubelet[2096]: I0707 06:07:31.443249 2096 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:07:31.443868 kubelet[2096]: E0707 06:07:31.443845 2096 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:07:31.444022 kubelet[2096]: E0707 06:07:31.443863 2096 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 06:07:31.444242 kubelet[2096]: I0707 06:07:31.444211 2096 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:07:31.444816 kubelet[2096]: E0707 06:07:31.444779 2096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="200ms" Jul 7 06:07:31.445069 kubelet[2096]: I0707 06:07:31.445051 2096 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:07:31.445781 kubelet[2096]: I0707 06:07:31.445233 2096 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:07:31.447364 kubelet[2096]: E0707 06:07:31.445880 2096 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe3132502b58c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:07:31.435230604 +0000 UTC m=+1.517999100,LastTimestamp:2025-07-07 06:07:31.435230604 +0000 UTC m=+1.517999100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:07:31.447693 kubelet[2096]: I0707 06:07:31.447562 2096 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:07:31.448320 kubelet[2096]: I0707 06:07:31.448285 2096 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:07:31.449811 kubelet[2096]: E0707 06:07:31.449784 2096 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:07:31.460043 kubelet[2096]: I0707 06:07:31.460006 2096 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:07:31.460043 kubelet[2096]: I0707 06:07:31.460035 2096 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:07:31.460148 kubelet[2096]: I0707 06:07:31.460049 2096 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:07:31.460148 kubelet[2096]: I0707 06:07:31.460068 2096 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:07:31.461327 kubelet[2096]: I0707 06:07:31.461206 2096 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 06:07:31.461327 kubelet[2096]: I0707 06:07:31.461233 2096 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:07:31.461327 kubelet[2096]: I0707 06:07:31.461251 2096 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 06:07:31.461327 kubelet[2096]: I0707 06:07:31.461260 2096 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:07:31.461327 kubelet[2096]: E0707 06:07:31.461298 2096 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:07:31.462020 kubelet[2096]: E0707 06:07:31.461804 2096 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 06:07:31.463814 kubelet[2096]: I0707 06:07:31.463787 2096 policy_none.go:49] "None policy: Start" Jul 7 06:07:31.463814 kubelet[2096]: I0707 06:07:31.463811 2096 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:07:31.463814 kubelet[2096]: I0707 06:07:31.463830 2096 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:07:31.468906 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:07:31.487273 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 7 06:07:31.489860 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:07:31.500412 kubelet[2096]: E0707 06:07:31.500378 2096 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:07:31.500662 kubelet[2096]: I0707 06:07:31.500584 2096 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:07:31.500662 kubelet[2096]: I0707 06:07:31.500602 2096 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:07:31.501274 kubelet[2096]: I0707 06:07:31.500952 2096 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:07:31.501908 kubelet[2096]: E0707 06:07:31.501887 2096 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:07:31.501956 kubelet[2096]: E0707 06:07:31.501933 2096 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:07:31.571105 systemd[1]: Created slice kubepods-burstable-pod7a0bf62a34fd064be7f9b86901c2c279.slice - libcontainer container kubepods-burstable-pod7a0bf62a34fd064be7f9b86901c2c279.slice. Jul 7 06:07:31.583702 kubelet[2096]: E0707 06:07:31.583656 2096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:07:31.585685 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 7 06:07:31.587247 kubelet[2096]: E0707 06:07:31.587084 2096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:07:31.589273 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
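The kubepods-burstable-pod<uid>.slice units created here follow from the SystemdCgroup:true runc option in the earlier CRI config dump: with the systemd cgroup driver every pod gets its own slice, so the hierarchy can be inspected with systemd's own tooling:

# Walk the pod cgroup hierarchy managed by the systemd driver.
systemd-cgls --no-pager kubepods.slice | head -n 20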
Jul 7 06:07:31.590711 kubelet[2096]: E0707 06:07:31.590660 2096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:07:31.602810 kubelet[2096]: I0707 06:07:31.602711 2096 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:07:31.605067 kubelet[2096]: E0707 06:07:31.605039 2096 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jul 7 06:07:31.645743 kubelet[2096]: E0707 06:07:31.645597 2096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="400ms" Jul 7 06:07:31.745202 kubelet[2096]: I0707 06:07:31.745102 2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:31.745202 kubelet[2096]: I0707 06:07:31.745141 2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:31.745202 kubelet[2096]: I0707 06:07:31.745161 2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a0bf62a34fd064be7f9b86901c2c279-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a0bf62a34fd064be7f9b86901c2c279\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:31.745202 kubelet[2096]: I0707 06:07:31.745194 2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a0bf62a34fd064be7f9b86901c2c279-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a0bf62a34fd064be7f9b86901c2c279\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:31.745590 kubelet[2096]: I0707 06:07:31.745262 2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a0bf62a34fd064be7f9b86901c2c279-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a0bf62a34fd064be7f9b86901c2c279\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:31.745590 kubelet[2096]: I0707 06:07:31.745287 2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:31.745590 kubelet[2096]: I0707 06:07:31.745302 2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:31.745590 kubelet[2096]: I0707 06:07:31.745333 2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:31.745590 kubelet[2096]: I0707 06:07:31.745383 2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:07:31.806200 kubelet[2096]: I0707 06:07:31.806147 2096 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:07:31.806540 kubelet[2096]: E0707 06:07:31.806496 2096 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jul 7 06:07:31.884541 kubelet[2096]: E0707 06:07:31.884193 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:31.884976 containerd[1436]: time="2025-07-07T06:07:31.884936933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a0bf62a34fd064be7f9b86901c2c279,Namespace:kube-system,Attempt:0,}" Jul 7 06:07:31.887415 kubelet[2096]: E0707 06:07:31.887375 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:31.887752 containerd[1436]: time="2025-07-07T06:07:31.887724220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 7 06:07:31.891218 kubelet[2096]: E0707 06:07:31.891189 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:31.891544 containerd[1436]: time="2025-07-07T06:07:31.891519950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 7 06:07:32.047229 kubelet[2096]: E0707 06:07:32.047193 2096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="800ms" Jul 7 06:07:32.209316 kubelet[2096]: I0707 06:07:32.209017 2096 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:07:32.209437 kubelet[2096]: E0707 06:07:32.209373 2096 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" 
node="localhost" Jul 7 06:07:32.273693 kubelet[2096]: E0707 06:07:32.273633 2096 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 06:07:32.375164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681480868.mount: Deactivated successfully. Jul 7 06:07:32.380291 containerd[1436]: time="2025-07-07T06:07:32.380240424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:07:32.382073 containerd[1436]: time="2025-07-07T06:07:32.382006207Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:07:32.382618 containerd[1436]: time="2025-07-07T06:07:32.382575604Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:07:32.385047 containerd[1436]: time="2025-07-07T06:07:32.384934180Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:07:32.386116 containerd[1436]: time="2025-07-07T06:07:32.386021942Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:07:32.386647 containerd[1436]: time="2025-07-07T06:07:32.386617455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 7 06:07:32.387699 containerd[1436]: time="2025-07-07T06:07:32.387052911Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:07:32.390523 containerd[1436]: time="2025-07-07T06:07:32.390492930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:07:32.391334 containerd[1436]: time="2025-07-07T06:07:32.391304772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 505.97982ms" Jul 7 06:07:32.394374 containerd[1436]: time="2025-07-07T06:07:32.394341330Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.766588ms" Jul 7 06:07:32.395488 containerd[1436]: time="2025-07-07T06:07:32.395448009Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 507.655319ms" Jul 7 06:07:32.454770 kubelet[2096]: E0707 06:07:32.454728 2096 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 06:07:32.507079 kubelet[2096]: E0707 06:07:32.506976 2096 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549408663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549452137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549467614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549539124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549140582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549196974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549212132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549284601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549722457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549774450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549791887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:32.549914 containerd[1436]: time="2025-07-07T06:07:32.549867316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:32.576828 systemd[1]: Started cri-containerd-1c02942052148bece2db75e9dca4305d1199d0f5e7429ac5e015ffb5de80da2a.scope - libcontainer container 1c02942052148bece2db75e9dca4305d1199d0f5e7429ac5e015ffb5de80da2a. Jul 7 06:07:32.577963 systemd[1]: Started cri-containerd-e9a33816a3c7049e04c925d14d508d9faab0c3502f677b1c8ddcc5e2c3b547e3.scope - libcontainer container e9a33816a3c7049e04c925d14d508d9faab0c3502f677b1c8ddcc5e2c3b547e3. Jul 7 06:07:32.580858 systemd[1]: Started cri-containerd-7cad2b51af9065c8e0aa0ea6f984bd5708448cc2f967fafe3bab1194166428e7.scope - libcontainer container 7cad2b51af9065c8e0aa0ea6f984bd5708448cc2f967fafe3bab1194166428e7. Jul 7 06:07:32.611021 containerd[1436]: time="2025-07-07T06:07:32.610946899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c02942052148bece2db75e9dca4305d1199d0f5e7429ac5e015ffb5de80da2a\"" Jul 7 06:07:32.616982 containerd[1436]: time="2025-07-07T06:07:32.616941306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9a33816a3c7049e04c925d14d508d9faab0c3502f677b1c8ddcc5e2c3b547e3\"" Jul 7 06:07:32.619060 kubelet[2096]: E0707 06:07:32.618508 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:32.619060 kubelet[2096]: E0707 06:07:32.619044 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:32.620552 containerd[1436]: time="2025-07-07T06:07:32.620418440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a0bf62a34fd064be7f9b86901c2c279,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cad2b51af9065c8e0aa0ea6f984bd5708448cc2f967fafe3bab1194166428e7\"" Jul 7 06:07:32.621752 kubelet[2096]: E0707 06:07:32.621721 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:32.625582 containerd[1436]: time="2025-07-07T06:07:32.625503299Z" level=info msg="CreateContainer within sandbox \"1c02942052148bece2db75e9dca4305d1199d0f5e7429ac5e015ffb5de80da2a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:07:32.627683 containerd[1436]: time="2025-07-07T06:07:32.627636708Z" level=info msg="CreateContainer within sandbox \"7cad2b51af9065c8e0aa0ea6f984bd5708448cc2f967fafe3bab1194166428e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:07:32.629265 containerd[1436]: time="2025-07-07T06:07:32.629233556Z" level=info msg="CreateContainer within sandbox \"e9a33816a3c7049e04c925d14d508d9faab0c3502f677b1c8ddcc5e2c3b547e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:07:32.641720 containerd[1436]: time="2025-07-07T06:07:32.641653347Z" level=info msg="CreateContainer within sandbox \"7cad2b51af9065c8e0aa0ea6f984bd5708448cc2f967fafe3bab1194166428e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d78b4e786fa9f94362bcce53934ff3645ed9a619205d5773f1d80ddf4959db6\"" Jul 7 06:07:32.643528 
containerd[1436]: time="2025-07-07T06:07:32.642405797Z" level=info msg="StartContainer for \"7d78b4e786fa9f94362bcce53934ff3645ed9a619205d5773f1d80ddf4959db6\"" Jul 7 06:07:32.643528 containerd[1436]: time="2025-07-07T06:07:32.642653721Z" level=info msg="CreateContainer within sandbox \"1c02942052148bece2db75e9dca4305d1199d0f5e7429ac5e015ffb5de80da2a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"256bac112162eb8758d5510699b6295bff42a9e81adf0ce5e6c2d6ac4ebea2ea\"" Jul 7 06:07:32.643789 containerd[1436]: time="2025-07-07T06:07:32.643751201Z" level=info msg="StartContainer for \"256bac112162eb8758d5510699b6295bff42a9e81adf0ce5e6c2d6ac4ebea2ea\"" Jul 7 06:07:32.646592 containerd[1436]: time="2025-07-07T06:07:32.646542275Z" level=info msg="CreateContainer within sandbox \"e9a33816a3c7049e04c925d14d508d9faab0c3502f677b1c8ddcc5e2c3b547e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7084e76b5bcc08c9d8d65a8bf85d9130a88b0a739c8b35ff6f8253b4d9718926\"" Jul 7 06:07:32.646984 containerd[1436]: time="2025-07-07T06:07:32.646963613Z" level=info msg="StartContainer for \"7084e76b5bcc08c9d8d65a8bf85d9130a88b0a739c8b35ff6f8253b4d9718926\"" Jul 7 06:07:32.669833 systemd[1]: Started cri-containerd-256bac112162eb8758d5510699b6295bff42a9e81adf0ce5e6c2d6ac4ebea2ea.scope - libcontainer container 256bac112162eb8758d5510699b6295bff42a9e81adf0ce5e6c2d6ac4ebea2ea. Jul 7 06:07:32.670754 systemd[1]: Started cri-containerd-7d78b4e786fa9f94362bcce53934ff3645ed9a619205d5773f1d80ddf4959db6.scope - libcontainer container 7d78b4e786fa9f94362bcce53934ff3645ed9a619205d5773f1d80ddf4959db6. Jul 7 06:07:32.672927 systemd[1]: Started cri-containerd-7084e76b5bcc08c9d8d65a8bf85d9130a88b0a739c8b35ff6f8253b4d9718926.scope - libcontainer container 7084e76b5bcc08c9d8d65a8bf85d9130a88b0a739c8b35ff6f8253b4d9718926. 
Jul 7 06:07:32.707424 containerd[1436]: time="2025-07-07T06:07:32.707374414Z" level=info msg="StartContainer for \"256bac112162eb8758d5510699b6295bff42a9e81adf0ce5e6c2d6ac4ebea2ea\" returns successfully" Jul 7 06:07:32.711580 containerd[1436]: time="2025-07-07T06:07:32.711538767Z" level=info msg="StartContainer for \"7d78b4e786fa9f94362bcce53934ff3645ed9a619205d5773f1d80ddf4959db6\" returns successfully" Jul 7 06:07:32.726140 containerd[1436]: time="2025-07-07T06:07:32.726045454Z" level=info msg="StartContainer for \"7084e76b5bcc08c9d8d65a8bf85d9130a88b0a739c8b35ff6f8253b4d9718926\" returns successfully" Jul 7 06:07:32.849610 kubelet[2096]: E0707 06:07:32.848072 2096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="1.6s" Jul 7 06:07:32.870983 kubelet[2096]: E0707 06:07:32.870860 2096 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 06:07:33.011185 kubelet[2096]: I0707 06:07:33.011154 2096 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:07:33.471707 kubelet[2096]: E0707 06:07:33.469900 2096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:07:33.471707 kubelet[2096]: E0707 06:07:33.470045 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:33.471707 kubelet[2096]: E0707 06:07:33.470788 2096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:07:33.471707 kubelet[2096]: E0707 06:07:33.470880 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:33.472082 kubelet[2096]: E0707 06:07:33.471992 2096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:07:33.472186 kubelet[2096]: E0707 06:07:33.472118 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:34.466840 kubelet[2096]: E0707 06:07:34.466806 2096 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 06:07:34.475257 kubelet[2096]: E0707 06:07:34.474999 2096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:07:34.475257 kubelet[2096]: E0707 06:07:34.475095 2096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:07:34.475257 kubelet[2096]: E0707 06:07:34.475116 2096 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:34.475257 kubelet[2096]: E0707 06:07:34.475195 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:34.475615 kubelet[2096]: E0707 06:07:34.475352 2096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:07:34.475615 kubelet[2096]: E0707 06:07:34.475452 2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:34.541241 kubelet[2096]: I0707 06:07:34.541054 2096 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 06:07:34.541241 kubelet[2096]: E0707 06:07:34.541092 2096 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 06:07:34.550329 kubelet[2096]: E0707 06:07:34.550293 2096 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:07:34.651080 kubelet[2096]: E0707 06:07:34.651051 2096 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:07:34.744653 kubelet[2096]: I0707 06:07:34.744559 2096 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:34.749238 kubelet[2096]: E0707 06:07:34.749211 2096 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:34.749238 kubelet[2096]: I0707 06:07:34.749236 2096 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:34.750689 kubelet[2096]: E0707 06:07:34.750656 2096 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:34.750739 kubelet[2096]: I0707 06:07:34.750692 2096 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:07:34.752293 kubelet[2096]: E0707 06:07:34.752265 2096 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 06:07:35.435055 kubelet[2096]: I0707 06:07:35.434904 2096 apiserver.go:52] "Watching apiserver" Jul 7 06:07:35.443439 kubelet[2096]: I0707 06:07:35.443404 2096 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:07:36.373652 systemd[1]: Reloading requested from client PID 2387 ('systemctl') (unit session-7.scope)... Jul 7 06:07:36.373750 systemd[1]: Reloading... Jul 7 06:07:36.440741 zram_generator::config[2427]: No configuration found. Jul 7 06:07:36.527118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 7 06:07:36.592725 systemd[1]: Reloading finished in 218 ms. Jul 7 06:07:36.623011 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:07:36.659931 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:07:36.660182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:07:36.660254 systemd[1]: kubelet.service: Consumed 1.889s CPU time, 130.4M memory peak, 0B memory swap peak. Jul 7 06:07:36.667884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:07:36.768471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:07:36.772134 (kubelet)[2468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:07:36.811976 kubelet[2468]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:07:36.811976 kubelet[2468]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:07:36.811976 kubelet[2468]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:07:36.811976 kubelet[2468]: I0707 06:07:36.811782 2468 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:07:36.820808 kubelet[2468]: I0707 06:07:36.819783 2468 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:07:36.820808 kubelet[2468]: I0707 06:07:36.819805 2468 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:07:36.820808 kubelet[2468]: I0707 06:07:36.820210 2468 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:07:36.824205 kubelet[2468]: I0707 06:07:36.824182 2468 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 06:07:36.826508 kubelet[2468]: I0707 06:07:36.826476 2468 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:07:36.829170 kubelet[2468]: E0707 06:07:36.829137 2468 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:07:36.829170 kubelet[2468]: I0707 06:07:36.829170 2468 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:07:36.831699 kubelet[2468]: I0707 06:07:36.831608 2468 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:07:36.831872 kubelet[2468]: I0707 06:07:36.831837 2468 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:07:36.832025 kubelet[2468]: I0707 06:07:36.831860 2468 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:07:36.832025 kubelet[2468]: I0707 06:07:36.832017 2468 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:07:36.832025 kubelet[2468]: I0707 06:07:36.832025 2468 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:07:36.832165 kubelet[2468]: I0707 06:07:36.832064 2468 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:07:36.832326 kubelet[2468]: I0707 06:07:36.832213 2468 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:07:36.832326 kubelet[2468]: I0707 06:07:36.832235 2468 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:07:36.832326 kubelet[2468]: I0707 06:07:36.832258 2468 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:07:36.832326 kubelet[2468]: I0707 06:07:36.832270 2468 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:07:36.834159 kubelet[2468]: I0707 06:07:36.833288 2468 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:07:36.839318 kubelet[2468]: I0707 06:07:36.837999 2468 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:07:36.841813 kubelet[2468]: I0707 06:07:36.841795 2468 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:07:36.843476 kubelet[2468]: I0707 06:07:36.841940 2468 server.go:1289] "Started kubelet" Jul 7 06:07:36.843476 kubelet[2468]: I0707 06:07:36.843332 2468 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:07:36.843816 kubelet[2468]: I0707 06:07:36.843791 2468 
server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:07:36.844692 kubelet[2468]: I0707 06:07:36.844453 2468 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:07:36.844825 kubelet[2468]: I0707 06:07:36.844807 2468 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:07:36.844980 kubelet[2468]: I0707 06:07:36.844959 2468 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:07:36.845535 kubelet[2468]: I0707 06:07:36.845492 2468 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:07:36.849742 kubelet[2468]: I0707 06:07:36.849585 2468 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:07:36.852367 kubelet[2468]: I0707 06:07:36.852325 2468 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:07:36.852486 kubelet[2468]: E0707 06:07:36.852466 2468 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:07:36.854127 kubelet[2468]: I0707 06:07:36.854109 2468 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:07:36.854430 kubelet[2468]: I0707 06:07:36.854406 2468 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:07:36.854965 kubelet[2468]: I0707 06:07:36.854940 2468 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:07:36.855659 kubelet[2468]: I0707 06:07:36.855626 2468 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:07:36.855981 kubelet[2468]: E0707 06:07:36.855951 2468 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:07:36.860926 kubelet[2468]: I0707 06:07:36.860886 2468 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:07:36.862222 kubelet[2468]: I0707 06:07:36.862183 2468 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 06:07:36.862222 kubelet[2468]: I0707 06:07:36.862208 2468 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:07:36.862222 kubelet[2468]: I0707 06:07:36.862224 2468 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:07:36.862222 kubelet[2468]: I0707 06:07:36.862230 2468 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:07:36.862354 kubelet[2468]: E0707 06:07:36.862267 2468 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:07:36.887250 kubelet[2468]: I0707 06:07:36.887220 2468 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:07:36.887250 kubelet[2468]: I0707 06:07:36.887241 2468 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:07:36.887371 kubelet[2468]: I0707 06:07:36.887262 2468 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:07:36.887405 kubelet[2468]: I0707 06:07:36.887380 2468 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:07:36.887405 kubelet[2468]: I0707 06:07:36.887391 2468 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:07:36.887405 kubelet[2468]: I0707 06:07:36.887406 2468 policy_none.go:49] "None policy: Start" Jul 7 06:07:36.887469 kubelet[2468]: I0707 06:07:36.887415 2468 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:07:36.887469 kubelet[2468]: I0707 06:07:36.887424 2468 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:07:36.887527 kubelet[2468]: I0707 06:07:36.887510 2468 state_mem.go:75] "Updated machine memory state" Jul 7 06:07:36.891627 kubelet[2468]: E0707 06:07:36.891173 2468 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:07:36.891627 kubelet[2468]: I0707 06:07:36.891344 2468 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:07:36.891627 kubelet[2468]: I0707 06:07:36.891355 2468 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:07:36.891627 kubelet[2468]: I0707 06:07:36.891543 2468 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:07:36.893530 kubelet[2468]: E0707 06:07:36.893495 2468 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:07:36.963168 kubelet[2468]: I0707 06:07:36.963132 2468 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:36.963490 kubelet[2468]: I0707 06:07:36.963176 2468 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:07:36.963490 kubelet[2468]: I0707 06:07:36.963241 2468 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:36.996594 kubelet[2468]: I0707 06:07:36.996511 2468 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:07:37.002745 kubelet[2468]: I0707 06:07:37.002686 2468 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 7 06:07:37.003218 kubelet[2468]: I0707 06:07:37.003196 2468 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 06:07:37.056308 kubelet[2468]: I0707 06:07:37.056099 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a0bf62a34fd064be7f9b86901c2c279-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a0bf62a34fd064be7f9b86901c2c279\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:37.056308 kubelet[2468]: I0707 06:07:37.056143 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a0bf62a34fd064be7f9b86901c2c279-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a0bf62a34fd064be7f9b86901c2c279\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:37.056308 kubelet[2468]: I0707 06:07:37.056163 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:37.056308 kubelet[2468]: I0707 06:07:37.056179 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:37.056308 kubelet[2468]: I0707 06:07:37.056194 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:37.056485 kubelet[2468]: I0707 06:07:37.056209 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:37.056485 kubelet[2468]: I0707 06:07:37.056222 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:07:37.056485 kubelet[2468]: I0707 06:07:37.056240 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a0bf62a34fd064be7f9b86901c2c279-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a0bf62a34fd064be7f9b86901c2c279\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:37.056485 kubelet[2468]: I0707 06:07:37.056255 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:37.273410 kubelet[2468]: E0707 06:07:37.273183 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:37.273410 kubelet[2468]: E0707 06:07:37.273264 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:37.273410 kubelet[2468]: E0707 06:07:37.273356 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:37.833002 kubelet[2468]: I0707 06:07:37.832735 2468 apiserver.go:52] "Watching apiserver" Jul 7 06:07:37.855387 kubelet[2468]: I0707 06:07:37.855339 2468 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:07:37.876904 kubelet[2468]: I0707 06:07:37.876774 2468 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:07:37.877613 kubelet[2468]: I0707 06:07:37.877224 2468 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:37.877613 kubelet[2468]: I0707 06:07:37.877474 2468 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:37.888636 kubelet[2468]: I0707 06:07:37.888556 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.888542609 podStartE2EDuration="1.888542609s" podCreationTimestamp="2025-07-07 06:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:07:37.877171608 +0000 UTC m=+1.101783711" watchObservedRunningTime="2025-07-07 06:07:37.888542609 +0000 UTC m=+1.113154632" Jul 7 06:07:37.889352 kubelet[2468]: E0707 06:07:37.889327 2468 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 06:07:37.889430 kubelet[2468]: E0707 06:07:37.889411 2468 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 06:07:37.889659 kubelet[2468]: E0707 06:07:37.889482 2468 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:37.889659 kubelet[2468]: E0707 06:07:37.889540 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:37.889659 kubelet[2468]: E0707 06:07:37.889600 2468 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:07:37.890326 kubelet[2468]: E0707 06:07:37.890244 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:37.901732 kubelet[2468]: I0707 06:07:37.901650 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.901634508 podStartE2EDuration="1.901634508s" podCreationTimestamp="2025-07-07 06:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:07:37.889882228 +0000 UTC m=+1.114494251" watchObservedRunningTime="2025-07-07 06:07:37.901634508 +0000 UTC m=+1.126246531" Jul 7 06:07:37.914549 kubelet[2468]: I0707 06:07:37.914498 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.914475753 podStartE2EDuration="1.914475753s" podCreationTimestamp="2025-07-07 06:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:07:37.902041585 +0000 UTC m=+1.126653608" watchObservedRunningTime="2025-07-07 06:07:37.914475753 +0000 UTC m=+1.139087776" Jul 7 06:07:38.878831 kubelet[2468]: E0707 06:07:38.878469 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:38.878831 kubelet[2468]: E0707 06:07:38.878569 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:38.878831 kubelet[2468]: E0707 06:07:38.878824 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:41.374717 kubelet[2468]: E0707 06:07:41.374656 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:42.983176 kubelet[2468]: E0707 06:07:42.982967 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:43.884935 kubelet[2468]: E0707 06:07:43.884856 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:44.673505 kubelet[2468]: I0707 06:07:44.673474 2468 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:07:44.674167 
containerd[1436]: time="2025-07-07T06:07:44.674077677Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 06:07:44.674448 kubelet[2468]: I0707 06:07:44.674258 2468 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:07:44.886384 kubelet[2468]: E0707 06:07:44.886348 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:45.449613 systemd[1]: Created slice kubepods-besteffort-podea332f3b_9358_4aa4_9d9d_640096500751.slice - libcontainer container kubepods-besteffort-podea332f3b_9358_4aa4_9d9d_640096500751.slice. Jul 7 06:07:45.510776 kubelet[2468]: I0707 06:07:45.510703 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khb9\" (UniqueName: \"kubernetes.io/projected/ea332f3b-9358-4aa4-9d9d-640096500751-kube-api-access-5khb9\") pod \"kube-proxy-qbwgh\" (UID: \"ea332f3b-9358-4aa4-9d9d-640096500751\") " pod="kube-system/kube-proxy-qbwgh" Jul 7 06:07:45.510776 kubelet[2468]: I0707 06:07:45.510747 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ea332f3b-9358-4aa4-9d9d-640096500751-kube-proxy\") pod \"kube-proxy-qbwgh\" (UID: \"ea332f3b-9358-4aa4-9d9d-640096500751\") " pod="kube-system/kube-proxy-qbwgh" Jul 7 06:07:45.510776 kubelet[2468]: I0707 06:07:45.510773 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea332f3b-9358-4aa4-9d9d-640096500751-xtables-lock\") pod \"kube-proxy-qbwgh\" (UID: \"ea332f3b-9358-4aa4-9d9d-640096500751\") " pod="kube-system/kube-proxy-qbwgh" Jul 7 06:07:45.510776 kubelet[2468]: I0707 06:07:45.510788 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea332f3b-9358-4aa4-9d9d-640096500751-lib-modules\") pod \"kube-proxy-qbwgh\" (UID: \"ea332f3b-9358-4aa4-9d9d-640096500751\") " pod="kube-system/kube-proxy-qbwgh" Jul 7 06:07:45.660526 systemd[1]: Created slice kubepods-besteffort-poda5c3fe16_1136_4afe_bd50_a6dca2c8c3f7.slice - libcontainer container kubepods-besteffort-poda5c3fe16_1136_4afe_bd50_a6dca2c8c3f7.slice. 
Jul 7 06:07:45.712105 kubelet[2468]: I0707 06:07:45.711997 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxvd\" (UniqueName: \"kubernetes.io/projected/a5c3fe16-1136-4afe-bd50-a6dca2c8c3f7-kube-api-access-7nxvd\") pod \"tigera-operator-747864d56d-qsc5w\" (UID: \"a5c3fe16-1136-4afe-bd50-a6dca2c8c3f7\") " pod="tigera-operator/tigera-operator-747864d56d-qsc5w" Jul 7 06:07:45.712105 kubelet[2468]: I0707 06:07:45.712036 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a5c3fe16-1136-4afe-bd50-a6dca2c8c3f7-var-lib-calico\") pod \"tigera-operator-747864d56d-qsc5w\" (UID: \"a5c3fe16-1136-4afe-bd50-a6dca2c8c3f7\") " pod="tigera-operator/tigera-operator-747864d56d-qsc5w" Jul 7 06:07:45.762658 kubelet[2468]: E0707 06:07:45.762180 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:45.763464 containerd[1436]: time="2025-07-07T06:07:45.762979933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbwgh,Uid:ea332f3b-9358-4aa4-9d9d-640096500751,Namespace:kube-system,Attempt:0,}" Jul 7 06:07:45.780395 containerd[1436]: time="2025-07-07T06:07:45.780297043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:07:45.780395 containerd[1436]: time="2025-07-07T06:07:45.780352439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:07:45.780395 containerd[1436]: time="2025-07-07T06:07:45.780363519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:45.780593 containerd[1436]: time="2025-07-07T06:07:45.780434394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:45.803827 systemd[1]: Started cri-containerd-e7e91619d0b842d58f593440d753860a0c5ed95251cbcf30817429feeca6b112.scope - libcontainer container e7e91619d0b842d58f593440d753860a0c5ed95251cbcf30817429feeca6b112. 
Jul 7 06:07:45.822498 containerd[1436]: time="2025-07-07T06:07:45.822458309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbwgh,Uid:ea332f3b-9358-4aa4-9d9d-640096500751,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7e91619d0b842d58f593440d753860a0c5ed95251cbcf30817429feeca6b112\"" Jul 7 06:07:45.823407 kubelet[2468]: E0707 06:07:45.823370 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:45.827353 containerd[1436]: time="2025-07-07T06:07:45.827300524Z" level=info msg="CreateContainer within sandbox \"e7e91619d0b842d58f593440d753860a0c5ed95251cbcf30817429feeca6b112\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:07:45.839481 containerd[1436]: time="2025-07-07T06:07:45.839411442Z" level=info msg="CreateContainer within sandbox \"e7e91619d0b842d58f593440d753860a0c5ed95251cbcf30817429feeca6b112\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0908986f1b3b076454c434621acedc950c5e03daf2e6ee8df2959e097552fe6a\"" Jul 7 06:07:45.839977 containerd[1436]: time="2025-07-07T06:07:45.839951368Z" level=info msg="StartContainer for \"0908986f1b3b076454c434621acedc950c5e03daf2e6ee8df2959e097552fe6a\"" Jul 7 06:07:45.862832 systemd[1]: Started cri-containerd-0908986f1b3b076454c434621acedc950c5e03daf2e6ee8df2959e097552fe6a.scope - libcontainer container 0908986f1b3b076454c434621acedc950c5e03daf2e6ee8df2959e097552fe6a. Jul 7 06:07:45.889527 containerd[1436]: time="2025-07-07T06:07:45.889464572Z" level=info msg="StartContainer for \"0908986f1b3b076454c434621acedc950c5e03daf2e6ee8df2959e097552fe6a\" returns successfully" Jul 7 06:07:45.892794 kubelet[2468]: E0707 06:07:45.892764 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:45.908932 kubelet[2468]: I0707 06:07:45.908772 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qbwgh" podStartSLOduration=0.908758077 podStartE2EDuration="908.758077ms" podCreationTimestamp="2025-07-07 06:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:07:45.908607127 +0000 UTC m=+9.133219150" watchObservedRunningTime="2025-07-07 06:07:45.908758077 +0000 UTC m=+9.133370100" Jul 7 06:07:45.966486 containerd[1436]: time="2025-07-07T06:07:45.966053951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-qsc5w,Uid:a5c3fe16-1136-4afe-bd50-a6dca2c8c3f7,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:07:45.989596 containerd[1436]: time="2025-07-07T06:07:45.989334406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:07:45.989596 containerd[1436]: time="2025-07-07T06:07:45.989391282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:07:45.989596 containerd[1436]: time="2025-07-07T06:07:45.989406961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:45.989596 containerd[1436]: time="2025-07-07T06:07:45.989493156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:46.004858 systemd[1]: Started cri-containerd-4c51c92cc3655ed70f8f0fa9e799adc4ac17177def13fd47c28d96e884cf789c.scope - libcontainer container 4c51c92cc3655ed70f8f0fa9e799adc4ac17177def13fd47c28d96e884cf789c. Jul 7 06:07:46.034855 containerd[1436]: time="2025-07-07T06:07:46.034811878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-qsc5w,Uid:a5c3fe16-1136-4afe-bd50-a6dca2c8c3f7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4c51c92cc3655ed70f8f0fa9e799adc4ac17177def13fd47c28d96e884cf789c\"" Jul 7 06:07:46.036394 containerd[1436]: time="2025-07-07T06:07:46.036331188Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:07:46.102331 kubelet[2468]: E0707 06:07:46.102204 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:46.895756 kubelet[2468]: E0707 06:07:46.895726 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:47.280761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182326216.mount: Deactivated successfully. Jul 7 06:07:48.055092 containerd[1436]: time="2025-07-07T06:07:48.055047508Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:48.055682 containerd[1436]: time="2025-07-07T06:07:48.055628278Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 7 06:07:48.056570 containerd[1436]: time="2025-07-07T06:07:48.056516112Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:48.058748 containerd[1436]: time="2025-07-07T06:07:48.058720838Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:48.059474 containerd[1436]: time="2025-07-07T06:07:48.059445080Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.023078574s" Jul 7 06:07:48.059516 containerd[1436]: time="2025-07-07T06:07:48.059480318Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 7 06:07:48.063099 containerd[1436]: time="2025-07-07T06:07:48.063067892Z" level=info msg="CreateContainer within sandbox \"4c51c92cc3655ed70f8f0fa9e799adc4ac17177def13fd47c28d96e884cf789c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:07:48.073924 containerd[1436]: time="2025-07-07T06:07:48.073858733Z" level=info msg="CreateContainer within sandbox 
\"4c51c92cc3655ed70f8f0fa9e799adc4ac17177def13fd47c28d96e884cf789c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1e974e1eda5797905396a8f3817797fbeff8d71e8d2d45a46714ab1d9ebf0b4d\"" Jul 7 06:07:48.074255 containerd[1436]: time="2025-07-07T06:07:48.074229193Z" level=info msg="StartContainer for \"1e974e1eda5797905396a8f3817797fbeff8d71e8d2d45a46714ab1d9ebf0b4d\"" Jul 7 06:07:48.103864 systemd[1]: Started cri-containerd-1e974e1eda5797905396a8f3817797fbeff8d71e8d2d45a46714ab1d9ebf0b4d.scope - libcontainer container 1e974e1eda5797905396a8f3817797fbeff8d71e8d2d45a46714ab1d9ebf0b4d. Jul 7 06:07:48.158013 containerd[1436]: time="2025-07-07T06:07:48.157961971Z" level=info msg="StartContainer for \"1e974e1eda5797905396a8f3817797fbeff8d71e8d2d45a46714ab1d9ebf0b4d\" returns successfully" Jul 7 06:07:48.908122 kubelet[2468]: I0707 06:07:48.907941 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-qsc5w" podStartSLOduration=1.883767442 podStartE2EDuration="3.907925597s" podCreationTimestamp="2025-07-07 06:07:45 +0000 UTC" firstStartedPulling="2025-07-07 06:07:46.035950971 +0000 UTC m=+9.260562994" lastFinishedPulling="2025-07-07 06:07:48.060109126 +0000 UTC m=+11.284721149" observedRunningTime="2025-07-07 06:07:48.907551857 +0000 UTC m=+12.132163880" watchObservedRunningTime="2025-07-07 06:07:48.907925597 +0000 UTC m=+12.132537620" Jul 7 06:07:51.386925 kubelet[2468]: E0707 06:07:51.386854 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:51.910606 kubelet[2468]: E0707 06:07:51.910202 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:52.139118 update_engine[1428]: I20250707 06:07:52.139024 1428 update_attempter.cc:509] Updating boot flags... Jul 7 06:07:52.187692 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2863) Jul 7 06:07:52.241725 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2861) Jul 7 06:07:52.266039 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2861) Jul 7 06:07:53.450290 sudo[1615]: pam_unix(sudo:session): session closed for user root Jul 7 06:07:53.456056 sshd[1612]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:53.463282 systemd[1]: sshd@6-10.0.0.102:22-10.0.0.1:40314.service: Deactivated successfully. Jul 7 06:07:53.465744 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:07:53.467770 systemd[1]: session-7.scope: Consumed 6.178s CPU time, 156.9M memory peak, 0B memory swap peak. Jul 7 06:07:53.469604 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:07:53.473537 systemd-logind[1424]: Removed session 7. Jul 7 06:07:57.630806 systemd[1]: Created slice kubepods-besteffort-pod694d3f3d_74c8_424f_a163_837e6e827dbb.slice - libcontainer container kubepods-besteffort-pod694d3f3d_74c8_424f_a163_837e6e827dbb.slice. 
Jul 7 06:07:57.697594 kubelet[2468]: I0707 06:07:57.697551 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw2l9\" (UniqueName: \"kubernetes.io/projected/694d3f3d-74c8-424f-a163-837e6e827dbb-kube-api-access-jw2l9\") pod \"calico-typha-55cdc678b8-mdl8m\" (UID: \"694d3f3d-74c8-424f-a163-837e6e827dbb\") " pod="calico-system/calico-typha-55cdc678b8-mdl8m" Jul 7 06:07:57.697594 kubelet[2468]: I0707 06:07:57.697596 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/694d3f3d-74c8-424f-a163-837e6e827dbb-typha-certs\") pod \"calico-typha-55cdc678b8-mdl8m\" (UID: \"694d3f3d-74c8-424f-a163-837e6e827dbb\") " pod="calico-system/calico-typha-55cdc678b8-mdl8m" Jul 7 06:07:57.698095 kubelet[2468]: I0707 06:07:57.697626 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/694d3f3d-74c8-424f-a163-837e6e827dbb-tigera-ca-bundle\") pod \"calico-typha-55cdc678b8-mdl8m\" (UID: \"694d3f3d-74c8-424f-a163-837e6e827dbb\") " pod="calico-system/calico-typha-55cdc678b8-mdl8m" Jul 7 06:07:57.934313 kubelet[2468]: E0707 06:07:57.934284 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:57.935406 containerd[1436]: time="2025-07-07T06:07:57.934958025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55cdc678b8-mdl8m,Uid:694d3f3d-74c8-424f-a163-837e6e827dbb,Namespace:calico-system,Attempt:0,}" Jul 7 06:07:57.956410 containerd[1436]: time="2025-07-07T06:07:57.956297446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:07:57.956410 containerd[1436]: time="2025-07-07T06:07:57.956377523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:07:57.956599 containerd[1436]: time="2025-07-07T06:07:57.956412962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:57.956599 containerd[1436]: time="2025-07-07T06:07:57.956556318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:57.985154 systemd[1]: Started cri-containerd-45dfca7f415684f28ab566c9f7ce8c21d9bf389da401a1b9c685e850b0a0a95c.scope - libcontainer container 45dfca7f415684f28ab566c9f7ce8c21d9bf389da401a1b9c685e850b0a0a95c. 
Jul 7 06:07:58.001125 kubelet[2468]: I0707 06:07:58.000556 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a892ee0a-5f57-4033-86ec-3af55b70c347-node-certs\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.001125 kubelet[2468]: I0707 06:07:58.000594 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a892ee0a-5f57-4033-86ec-3af55b70c347-var-run-calico\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.001125 kubelet[2468]: I0707 06:07:58.000611 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a892ee0a-5f57-4033-86ec-3af55b70c347-cni-net-dir\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.001125 kubelet[2468]: I0707 06:07:58.000627 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a892ee0a-5f57-4033-86ec-3af55b70c347-lib-modules\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.001125 kubelet[2468]: I0707 06:07:58.000641 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb982\" (UniqueName: \"kubernetes.io/projected/a892ee0a-5f57-4033-86ec-3af55b70c347-kube-api-access-qb982\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.000959 systemd[1]: Created slice kubepods-besteffort-poda892ee0a_5f57_4033_86ec_3af55b70c347.slice - libcontainer container kubepods-besteffort-poda892ee0a_5f57_4033_86ec_3af55b70c347.slice. 
Jul 7 06:07:58.001518 kubelet[2468]: I0707 06:07:58.000659 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a892ee0a-5f57-4033-86ec-3af55b70c347-xtables-lock\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.001518 kubelet[2468]: I0707 06:07:58.000706 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a892ee0a-5f57-4033-86ec-3af55b70c347-policysync\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.001518 kubelet[2468]: I0707 06:07:58.000732 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a892ee0a-5f57-4033-86ec-3af55b70c347-tigera-ca-bundle\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.001518 kubelet[2468]: I0707 06:07:58.000751 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a892ee0a-5f57-4033-86ec-3af55b70c347-cni-bin-dir\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.001518 kubelet[2468]: I0707 06:07:58.000773 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a892ee0a-5f57-4033-86ec-3af55b70c347-flexvol-driver-host\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.001660 kubelet[2468]: I0707 06:07:58.000790 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a892ee0a-5f57-4033-86ec-3af55b70c347-cni-log-dir\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.001660 kubelet[2468]: I0707 06:07:58.000804 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a892ee0a-5f57-4033-86ec-3af55b70c347-var-lib-calico\") pod \"calico-node-7xpvr\" (UID: \"a892ee0a-5f57-4033-86ec-3af55b70c347\") " pod="calico-system/calico-node-7xpvr" Jul 7 06:07:58.036318 containerd[1436]: time="2025-07-07T06:07:58.036270188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55cdc678b8-mdl8m,Uid:694d3f3d-74c8-424f-a163-837e6e827dbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"45dfca7f415684f28ab566c9f7ce8c21d9bf389da401a1b9c685e850b0a0a95c\"" Jul 7 06:07:58.037389 kubelet[2468]: E0707 06:07:58.037360 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:07:58.038735 containerd[1436]: time="2025-07-07T06:07:58.038707242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 06:07:58.103191 kubelet[2468]: E0707 06:07:58.103077 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", 
error: unexpected end of JSON input Jul 7 06:07:58.103191 kubelet[2468]: W0707 06:07:58.103102 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.103191 kubelet[2468]: E0707 06:07:58.103158 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.107746 kubelet[2468]: E0707 06:07:58.107724 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.108398 kubelet[2468]: W0707 06:07:58.107785 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.108398 kubelet[2468]: E0707 06:07:58.108121 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.112575 kubelet[2468]: E0707 06:07:58.112514 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.112575 kubelet[2468]: W0707 06:07:58.112529 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.112575 kubelet[2468]: E0707 06:07:58.112542 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.288999 kubelet[2468]: E0707 06:07:58.287304 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwndf" podUID="b4408009-2167-498a-9cbf-5110d3e01355" Jul 7 06:07:58.294738 kubelet[2468]: E0707 06:07:58.294659 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.295659 kubelet[2468]: W0707 06:07:58.294902 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.295659 kubelet[2468]: E0707 06:07:58.294933 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.296068 kubelet[2468]: E0707 06:07:58.295919 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.303251 kubelet[2468]: W0707 06:07:58.295939 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.303251 kubelet[2468]: E0707 06:07:58.303139 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.303437 kubelet[2468]: E0707 06:07:58.303414 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.303469 kubelet[2468]: W0707 06:07:58.303432 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.303469 kubelet[2468]: E0707 06:07:58.303453 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.303699 kubelet[2468]: E0707 06:07:58.303678 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.303738 kubelet[2468]: W0707 06:07:58.303702 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.303738 kubelet[2468]: E0707 06:07:58.303714 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.303930 kubelet[2468]: E0707 06:07:58.303912 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.303930 kubelet[2468]: W0707 06:07:58.303927 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.303977 kubelet[2468]: E0707 06:07:58.303938 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.304106 kubelet[2468]: E0707 06:07:58.304090 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.304106 kubelet[2468]: W0707 06:07:58.304105 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.304306 kubelet[2468]: E0707 06:07:58.304114 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.304306 kubelet[2468]: E0707 06:07:58.304250 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.304306 kubelet[2468]: W0707 06:07:58.304259 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.304306 kubelet[2468]: E0707 06:07:58.304277 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.304591 kubelet[2468]: E0707 06:07:58.304424 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.304591 kubelet[2468]: W0707 06:07:58.304444 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.304591 kubelet[2468]: E0707 06:07:58.304457 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.304919 kubelet[2468]: E0707 06:07:58.304862 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.304919 kubelet[2468]: W0707 06:07:58.304887 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.304919 kubelet[2468]: E0707 06:07:58.304902 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.305309 kubelet[2468]: E0707 06:07:58.305287 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.305351 kubelet[2468]: W0707 06:07:58.305316 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.305351 kubelet[2468]: E0707 06:07:58.305328 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.309529 kubelet[2468]: E0707 06:07:58.309494 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.309529 kubelet[2468]: W0707 06:07:58.309514 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.309529 kubelet[2468]: E0707 06:07:58.309529 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.310904 kubelet[2468]: E0707 06:07:58.310319 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.310904 kubelet[2468]: W0707 06:07:58.310334 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.310904 kubelet[2468]: E0707 06:07:58.310347 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.311027 kubelet[2468]: E0707 06:07:58.310965 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.311027 kubelet[2468]: W0707 06:07:58.310982 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.311027 kubelet[2468]: E0707 06:07:58.310997 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.311456 kubelet[2468]: E0707 06:07:58.311433 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.311456 kubelet[2468]: W0707 06:07:58.311452 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.311529 kubelet[2468]: E0707 06:07:58.311466 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.311694 kubelet[2468]: E0707 06:07:58.311662 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.311694 kubelet[2468]: W0707 06:07:58.311688 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.311744 kubelet[2468]: E0707 06:07:58.311697 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.311857 kubelet[2468]: E0707 06:07:58.311842 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.311857 kubelet[2468]: W0707 06:07:58.311852 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.311910 kubelet[2468]: E0707 06:07:58.311860 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.312157 kubelet[2468]: E0707 06:07:58.312136 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.312157 kubelet[2468]: W0707 06:07:58.312152 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.312213 kubelet[2468]: E0707 06:07:58.312165 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.312419 kubelet[2468]: E0707 06:07:58.312334 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.312419 kubelet[2468]: W0707 06:07:58.312346 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.312419 kubelet[2468]: E0707 06:07:58.312354 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.312538 containerd[1436]: time="2025-07-07T06:07:58.312430877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7xpvr,Uid:a892ee0a-5f57-4033-86ec-3af55b70c347,Namespace:calico-system,Attempt:0,}" Jul 7 06:07:58.312572 kubelet[2468]: E0707 06:07:58.312551 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.312572 kubelet[2468]: W0707 06:07:58.312561 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.312572 kubelet[2468]: E0707 06:07:58.312570 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.312796 kubelet[2468]: E0707 06:07:58.312771 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.312892 kubelet[2468]: W0707 06:07:58.312818 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.312892 kubelet[2468]: E0707 06:07:58.312831 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.313138 kubelet[2468]: E0707 06:07:58.313120 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.313138 kubelet[2468]: W0707 06:07:58.313132 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.313138 kubelet[2468]: E0707 06:07:58.313142 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.313251 kubelet[2468]: I0707 06:07:58.313167 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4408009-2167-498a-9cbf-5110d3e01355-registration-dir\") pod \"csi-node-driver-fwndf\" (UID: \"b4408009-2167-498a-9cbf-5110d3e01355\") " pod="calico-system/csi-node-driver-fwndf" Jul 7 06:07:58.313470 kubelet[2468]: E0707 06:07:58.313453 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.313470 kubelet[2468]: W0707 06:07:58.313467 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.313528 kubelet[2468]: E0707 06:07:58.313479 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.313528 kubelet[2468]: I0707 06:07:58.313501 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2cp\" (UniqueName: \"kubernetes.io/projected/b4408009-2167-498a-9cbf-5110d3e01355-kube-api-access-qt2cp\") pod \"csi-node-driver-fwndf\" (UID: \"b4408009-2167-498a-9cbf-5110d3e01355\") " pod="calico-system/csi-node-driver-fwndf" Jul 7 06:07:58.313774 kubelet[2468]: E0707 06:07:58.313760 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.313774 kubelet[2468]: W0707 06:07:58.313772 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.313847 kubelet[2468]: E0707 06:07:58.313783 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.313847 kubelet[2468]: I0707 06:07:58.313804 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4408009-2167-498a-9cbf-5110d3e01355-kubelet-dir\") pod \"csi-node-driver-fwndf\" (UID: \"b4408009-2167-498a-9cbf-5110d3e01355\") " pod="calico-system/csi-node-driver-fwndf" Jul 7 06:07:58.314031 kubelet[2468]: E0707 06:07:58.314016 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.314115 kubelet[2468]: W0707 06:07:58.314031 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.314115 kubelet[2468]: E0707 06:07:58.314043 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.314260 kubelet[2468]: E0707 06:07:58.314223 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.314260 kubelet[2468]: W0707 06:07:58.314233 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.314260 kubelet[2468]: E0707 06:07:58.314242 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.314471 kubelet[2468]: E0707 06:07:58.314456 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.314471 kubelet[2468]: W0707 06:07:58.314470 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.314567 kubelet[2468]: E0707 06:07:58.314480 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.314713 kubelet[2468]: E0707 06:07:58.314698 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.314713 kubelet[2468]: W0707 06:07:58.314712 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.314792 kubelet[2468]: E0707 06:07:58.314721 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.314946 kubelet[2468]: E0707 06:07:58.314912 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.314946 kubelet[2468]: W0707 06:07:58.314930 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.314946 kubelet[2468]: E0707 06:07:58.314940 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.314946 kubelet[2468]: I0707 06:07:58.314967 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4408009-2167-498a-9cbf-5110d3e01355-socket-dir\") pod \"csi-node-driver-fwndf\" (UID: \"b4408009-2167-498a-9cbf-5110d3e01355\") " pod="calico-system/csi-node-driver-fwndf" Jul 7 06:07:58.315472 kubelet[2468]: E0707 06:07:58.315330 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.315472 kubelet[2468]: W0707 06:07:58.315345 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.315472 kubelet[2468]: E0707 06:07:58.315359 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.315708 kubelet[2468]: E0707 06:07:58.315694 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.315781 kubelet[2468]: W0707 06:07:58.315769 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.315889 kubelet[2468]: E0707 06:07:58.315823 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.316249 kubelet[2468]: E0707 06:07:58.316234 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.316441 kubelet[2468]: W0707 06:07:58.316306 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.316441 kubelet[2468]: E0707 06:07:58.316324 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.316441 kubelet[2468]: I0707 06:07:58.316355 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b4408009-2167-498a-9cbf-5110d3e01355-varrun\") pod \"csi-node-driver-fwndf\" (UID: \"b4408009-2167-498a-9cbf-5110d3e01355\") " pod="calico-system/csi-node-driver-fwndf" Jul 7 06:07:58.316740 kubelet[2468]: E0707 06:07:58.316725 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.316862 kubelet[2468]: W0707 06:07:58.316806 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.316943 kubelet[2468]: E0707 06:07:58.316930 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.317241 kubelet[2468]: E0707 06:07:58.317227 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.317411 kubelet[2468]: W0707 06:07:58.317332 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.317411 kubelet[2468]: E0707 06:07:58.317352 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.317815 kubelet[2468]: E0707 06:07:58.317693 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.317815 kubelet[2468]: W0707 06:07:58.317707 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.317815 kubelet[2468]: E0707 06:07:58.317719 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.318249 kubelet[2468]: E0707 06:07:58.318202 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.318249 kubelet[2468]: W0707 06:07:58.318217 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.318249 kubelet[2468]: E0707 06:07:58.318229 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.328913 containerd[1436]: time="2025-07-07T06:07:58.328782552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:07:58.328913 containerd[1436]: time="2025-07-07T06:07:58.328850231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:07:58.328913 containerd[1436]: time="2025-07-07T06:07:58.328865190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:58.329410 containerd[1436]: time="2025-07-07T06:07:58.329373656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:07:58.341890 systemd[1]: Started cri-containerd-b4a1f04719c8edf6a428af7eb0d3b0e16022522df1dd87529bb24b30c6638d24.scope - libcontainer container b4a1f04719c8edf6a428af7eb0d3b0e16022522df1dd87529bb24b30c6638d24. Jul 7 06:07:58.365321 containerd[1436]: time="2025-07-07T06:07:58.365185282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7xpvr,Uid:a892ee0a-5f57-4033-86ec-3af55b70c347,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4a1f04719c8edf6a428af7eb0d3b0e16022522df1dd87529bb24b30c6638d24\"" Jul 7 06:07:58.417273 kubelet[2468]: E0707 06:07:58.417244 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.417273 kubelet[2468]: W0707 06:07:58.417267 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.417426 kubelet[2468]: E0707 06:07:58.417288 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.417542 kubelet[2468]: E0707 06:07:58.417530 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.417542 kubelet[2468]: W0707 06:07:58.417542 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.417638 kubelet[2468]: E0707 06:07:58.417551 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.417850 kubelet[2468]: E0707 06:07:58.417838 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.417879 kubelet[2468]: W0707 06:07:58.417850 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.417879 kubelet[2468]: E0707 06:07:58.417860 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.418055 kubelet[2468]: E0707 06:07:58.418045 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.418094 kubelet[2468]: W0707 06:07:58.418055 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.418094 kubelet[2468]: E0707 06:07:58.418064 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.418233 kubelet[2468]: E0707 06:07:58.418223 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.418233 kubelet[2468]: W0707 06:07:58.418232 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.418303 kubelet[2468]: E0707 06:07:58.418241 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.418418 kubelet[2468]: E0707 06:07:58.418407 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.418449 kubelet[2468]: W0707 06:07:58.418418 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.418449 kubelet[2468]: E0707 06:07:58.418426 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.418580 kubelet[2468]: E0707 06:07:58.418569 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.418580 kubelet[2468]: W0707 06:07:58.418579 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.418630 kubelet[2468]: E0707 06:07:58.418587 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.418742 kubelet[2468]: E0707 06:07:58.418732 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.418791 kubelet[2468]: W0707 06:07:58.418750 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.418791 kubelet[2468]: E0707 06:07:58.418760 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.418952 kubelet[2468]: E0707 06:07:58.418942 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.418952 kubelet[2468]: W0707 06:07:58.418952 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.419029 kubelet[2468]: E0707 06:07:58.418960 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.419138 kubelet[2468]: E0707 06:07:58.419128 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.419138 kubelet[2468]: W0707 06:07:58.419138 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.419218 kubelet[2468]: E0707 06:07:58.419146 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.419306 kubelet[2468]: E0707 06:07:58.419294 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.419306 kubelet[2468]: W0707 06:07:58.419306 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.419390 kubelet[2468]: E0707 06:07:58.419314 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.419470 kubelet[2468]: E0707 06:07:58.419454 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.419470 kubelet[2468]: W0707 06:07:58.419462 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.419558 kubelet[2468]: E0707 06:07:58.419472 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.419613 kubelet[2468]: E0707 06:07:58.419600 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.419613 kubelet[2468]: W0707 06:07:58.419610 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.419663 kubelet[2468]: E0707 06:07:58.419618 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.419897 kubelet[2468]: E0707 06:07:58.419885 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.419897 kubelet[2468]: W0707 06:07:58.419896 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.420000 kubelet[2468]: E0707 06:07:58.419905 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.420117 kubelet[2468]: E0707 06:07:58.420103 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.420117 kubelet[2468]: W0707 06:07:58.420113 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.420183 kubelet[2468]: E0707 06:07:58.420121 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.420291 kubelet[2468]: E0707 06:07:58.420281 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.420291 kubelet[2468]: W0707 06:07:58.420291 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.420381 kubelet[2468]: E0707 06:07:58.420299 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.420448 kubelet[2468]: E0707 06:07:58.420438 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.420448 kubelet[2468]: W0707 06:07:58.420448 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.420498 kubelet[2468]: E0707 06:07:58.420455 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.420608 kubelet[2468]: E0707 06:07:58.420598 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.420608 kubelet[2468]: W0707 06:07:58.420608 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.420752 kubelet[2468]: E0707 06:07:58.420617 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.420792 kubelet[2468]: E0707 06:07:58.420785 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.420943 kubelet[2468]: W0707 06:07:58.420793 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.420943 kubelet[2468]: E0707 06:07:58.420801 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.421011 kubelet[2468]: E0707 06:07:58.420991 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.421011 kubelet[2468]: W0707 06:07:58.420999 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.421011 kubelet[2468]: E0707 06:07:58.421006 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.421404 kubelet[2468]: E0707 06:07:58.421386 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.421479 kubelet[2468]: W0707 06:07:58.421465 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.421547 kubelet[2468]: E0707 06:07:58.421537 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.422036 kubelet[2468]: E0707 06:07:58.421919 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.422036 kubelet[2468]: W0707 06:07:58.421934 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.422036 kubelet[2468]: E0707 06:07:58.421944 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.422229 kubelet[2468]: E0707 06:07:58.422205 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.422291 kubelet[2468]: W0707 06:07:58.422280 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.422342 kubelet[2468]: E0707 06:07:58.422332 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:07:58.422857 kubelet[2468]: E0707 06:07:58.422723 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.422857 kubelet[2468]: W0707 06:07:58.422736 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.422857 kubelet[2468]: E0707 06:07:58.422755 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.423029 kubelet[2468]: E0707 06:07:58.423015 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.423079 kubelet[2468]: W0707 06:07:58.423068 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.423161 kubelet[2468]: E0707 06:07:58.423149 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.432404 kubelet[2468]: E0707 06:07:58.432386 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:07:58.432404 kubelet[2468]: W0707 06:07:58.432401 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:07:58.432500 kubelet[2468]: E0707 06:07:58.432414 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:07:58.921097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2122827764.mount: Deactivated successfully. 
Jul 7 06:07:59.297830 containerd[1436]: time="2025-07-07T06:07:59.297777901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:59.298783 containerd[1436]: time="2025-07-07T06:07:59.298702197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 7 06:07:59.299477 containerd[1436]: time="2025-07-07T06:07:59.299406499Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:59.301755 containerd[1436]: time="2025-07-07T06:07:59.301701961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:59.302499 containerd[1436]: time="2025-07-07T06:07:59.302460662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.263720101s" Jul 7 06:07:59.302499 containerd[1436]: time="2025-07-07T06:07:59.302493581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 7 06:07:59.303708 containerd[1436]: time="2025-07-07T06:07:59.303504355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:07:59.321708 containerd[1436]: time="2025-07-07T06:07:59.321605373Z" level=info msg="CreateContainer within sandbox \"45dfca7f415684f28ab566c9f7ce8c21d9bf389da401a1b9c685e850b0a0a95c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:07:59.332309 containerd[1436]: time="2025-07-07T06:07:59.332270261Z" level=info msg="CreateContainer within sandbox \"45dfca7f415684f28ab566c9f7ce8c21d9bf389da401a1b9c685e850b0a0a95c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"10bc754d589c347f0ebbcb216d09816e7aea850d6ffb8735da82a29c67b36be7\"" Jul 7 06:07:59.332772 containerd[1436]: time="2025-07-07T06:07:59.332746289Z" level=info msg="StartContainer for \"10bc754d589c347f0ebbcb216d09816e7aea850d6ffb8735da82a29c67b36be7\"" Jul 7 06:07:59.357836 systemd[1]: Started cri-containerd-10bc754d589c347f0ebbcb216d09816e7aea850d6ffb8735da82a29c67b36be7.scope - libcontainer container 10bc754d589c347f0ebbcb216d09816e7aea850d6ffb8735da82a29c67b36be7. 
Jul 7 06:07:59.389649 containerd[1436]: time="2025-07-07T06:07:59.389537761Z" level=info msg="StartContainer for \"10bc754d589c347f0ebbcb216d09816e7aea850d6ffb8735da82a29c67b36be7\" returns successfully" Jul 7 06:07:59.862768 kubelet[2468]: E0707 06:07:59.862630 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwndf" podUID="b4408009-2167-498a-9cbf-5110d3e01355" Jul 7 06:07:59.967067 kubelet[2468]: E0707 06:07:59.967038 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:00.025844 kubelet[2468]: E0707 06:08:00.025727 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.025844 kubelet[2468]: W0707 06:08:00.025752 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.025844 kubelet[2468]: E0707 06:08:00.025771 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.026037 kubelet[2468]: E0707 06:08:00.025949 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.026037 kubelet[2468]: W0707 06:08:00.025958 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.026037 kubelet[2468]: E0707 06:08:00.025967 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.026152 kubelet[2468]: E0707 06:08:00.026139 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.026152 kubelet[2468]: W0707 06:08:00.026151 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.026209 kubelet[2468]: E0707 06:08:00.026161 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.026329 kubelet[2468]: E0707 06:08:00.026311 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.026329 kubelet[2468]: W0707 06:08:00.026319 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.026329 kubelet[2468]: E0707 06:08:00.026327 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:08:00.026538 kubelet[2468]: E0707 06:08:00.026482 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.026538 kubelet[2468]: W0707 06:08:00.026489 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.026538 kubelet[2468]: E0707 06:08:00.026497 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.026636 kubelet[2468]: E0707 06:08:00.026623 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.026636 kubelet[2468]: W0707 06:08:00.026633 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.026733 kubelet[2468]: E0707 06:08:00.026641 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.026806 kubelet[2468]: E0707 06:08:00.026797 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.026806 kubelet[2468]: W0707 06:08:00.026806 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.026873 kubelet[2468]: E0707 06:08:00.026814 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.026968 kubelet[2468]: E0707 06:08:00.026958 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.026968 kubelet[2468]: W0707 06:08:00.026968 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.027031 kubelet[2468]: E0707 06:08:00.026975 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.027130 kubelet[2468]: E0707 06:08:00.027116 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.027130 kubelet[2468]: W0707 06:08:00.027126 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.027199 kubelet[2468]: E0707 06:08:00.027134 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:08:00.027272 kubelet[2468]: E0707 06:08:00.027262 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.027272 kubelet[2468]: W0707 06:08:00.027271 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.027331 kubelet[2468]: E0707 06:08:00.027279 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.027409 kubelet[2468]: E0707 06:08:00.027400 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.027409 kubelet[2468]: W0707 06:08:00.027408 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.027469 kubelet[2468]: E0707 06:08:00.027415 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.027548 kubelet[2468]: E0707 06:08:00.027538 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.027548 kubelet[2468]: W0707 06:08:00.027547 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.027611 kubelet[2468]: E0707 06:08:00.027555 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.027732 kubelet[2468]: E0707 06:08:00.027720 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.027732 kubelet[2468]: W0707 06:08:00.027731 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.027800 kubelet[2468]: E0707 06:08:00.027739 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.027979 kubelet[2468]: E0707 06:08:00.027959 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.027979 kubelet[2468]: W0707 06:08:00.027968 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.027979 kubelet[2468]: E0707 06:08:00.027977 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:08:00.028143 kubelet[2468]: E0707 06:08:00.028132 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.028143 kubelet[2468]: W0707 06:08:00.028142 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.028207 kubelet[2468]: E0707 06:08:00.028150 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.028421 kubelet[2468]: E0707 06:08:00.028410 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.028421 kubelet[2468]: W0707 06:08:00.028421 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.028499 kubelet[2468]: E0707 06:08:00.028430 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.028604 kubelet[2468]: E0707 06:08:00.028593 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.028604 kubelet[2468]: W0707 06:08:00.028603 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.028661 kubelet[2468]: E0707 06:08:00.028611 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.028816 kubelet[2468]: E0707 06:08:00.028805 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.028816 kubelet[2468]: W0707 06:08:00.028816 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.028881 kubelet[2468]: E0707 06:08:00.028825 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.029061 kubelet[2468]: E0707 06:08:00.029048 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.029061 kubelet[2468]: W0707 06:08:00.029059 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.029126 kubelet[2468]: E0707 06:08:00.029068 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:08:00.036637 kubelet[2468]: E0707 06:08:00.036619 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.036637 kubelet[2468]: W0707 06:08:00.036635 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.036807 kubelet[2468]: E0707 06:08:00.036646 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.036852 kubelet[2468]: E0707 06:08:00.036837 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.036852 kubelet[2468]: W0707 06:08:00.036845 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.036938 kubelet[2468]: E0707 06:08:00.036853 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.037054 kubelet[2468]: E0707 06:08:00.037036 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.037054 kubelet[2468]: W0707 06:08:00.037045 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.037054 kubelet[2468]: E0707 06:08:00.037053 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.037234 kubelet[2468]: E0707 06:08:00.037223 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.037234 kubelet[2468]: W0707 06:08:00.037233 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.037294 kubelet[2468]: E0707 06:08:00.037241 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.037416 kubelet[2468]: E0707 06:08:00.037404 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.037416 kubelet[2468]: W0707 06:08:00.037415 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.037482 kubelet[2468]: E0707 06:08:00.037423 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:08:00.037789 kubelet[2468]: E0707 06:08:00.037775 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.037789 kubelet[2468]: W0707 06:08:00.037787 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.037871 kubelet[2468]: E0707 06:08:00.037800 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.037983 kubelet[2468]: E0707 06:08:00.037973 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.037983 kubelet[2468]: W0707 06:08:00.037983 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.038043 kubelet[2468]: E0707 06:08:00.037992 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.038147 kubelet[2468]: E0707 06:08:00.038134 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.038147 kubelet[2468]: W0707 06:08:00.038145 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.038195 kubelet[2468]: E0707 06:08:00.038153 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.038318 kubelet[2468]: E0707 06:08:00.038305 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.038318 kubelet[2468]: W0707 06:08:00.038317 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.038380 kubelet[2468]: E0707 06:08:00.038325 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.038619 kubelet[2468]: E0707 06:08:00.038602 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.038619 kubelet[2468]: W0707 06:08:00.038614 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.038708 kubelet[2468]: E0707 06:08:00.038623 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:08:00.038820 kubelet[2468]: E0707 06:08:00.038809 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.038820 kubelet[2468]: W0707 06:08:00.038820 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.038884 kubelet[2468]: E0707 06:08:00.038828 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.038998 kubelet[2468]: E0707 06:08:00.038988 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.038998 kubelet[2468]: W0707 06:08:00.038997 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.039057 kubelet[2468]: E0707 06:08:00.039005 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.039485 kubelet[2468]: E0707 06:08:00.039321 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.039485 kubelet[2468]: W0707 06:08:00.039388 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.039485 kubelet[2468]: E0707 06:08:00.039403 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:08:00.039806 kubelet[2468]: E0707 06:08:00.039790 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:08:00.039903 kubelet[2468]: W0707 06:08:00.039863 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:08:00.039903 kubelet[2468]: E0707 06:08:00.039880 2468 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:08:00.119011 containerd[1436]: time="2025-07-07T06:08:00.118896791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:00.121008 containerd[1436]: time="2025-07-07T06:08:00.120174640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 7 06:08:00.121876 containerd[1436]: time="2025-07-07T06:08:00.121836560Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:00.129478 containerd[1436]: time="2025-07-07T06:08:00.129422819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:00.130222 containerd[1436]: time="2025-07-07T06:08:00.130077123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 826.54085ms" Jul 7 06:08:00.130222 containerd[1436]: time="2025-07-07T06:08:00.130113803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 06:08:00.134019 containerd[1436]: time="2025-07-07T06:08:00.133987350Z" level=info msg="CreateContainer within sandbox \"b4a1f04719c8edf6a428af7eb0d3b0e16022522df1dd87529bb24b30c6638d24\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:08:00.145767 containerd[1436]: time="2025-07-07T06:08:00.145730389Z" level=info msg="CreateContainer within sandbox \"b4a1f04719c8edf6a428af7eb0d3b0e16022522df1dd87529bb24b30c6638d24\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b75994967ea85e6266bec45372677897e8ec69789de1651cde251837b6760246\"" Jul 7 06:08:00.146273 containerd[1436]: time="2025-07-07T06:08:00.146251217Z" level=info msg="StartContainer for \"b75994967ea85e6266bec45372677897e8ec69789de1651cde251837b6760246\"" Jul 7 06:08:00.173917 systemd[1]: Started cri-containerd-b75994967ea85e6266bec45372677897e8ec69789de1651cde251837b6760246.scope - libcontainer container b75994967ea85e6266bec45372677897e8ec69789de1651cde251837b6760246. Jul 7 06:08:00.197822 containerd[1436]: time="2025-07-07T06:08:00.197742746Z" level=info msg="StartContainer for \"b75994967ea85e6266bec45372677897e8ec69789de1651cde251837b6760246\" returns successfully" Jul 7 06:08:00.217696 systemd[1]: cri-containerd-b75994967ea85e6266bec45372677897e8ec69789de1651cde251837b6760246.scope: Deactivated successfully. 
Jul 7 06:08:00.259283 containerd[1436]: time="2025-07-07T06:08:00.259191237Z" level=info msg="shim disconnected" id=b75994967ea85e6266bec45372677897e8ec69789de1651cde251837b6760246 namespace=k8s.io Jul 7 06:08:00.259283 containerd[1436]: time="2025-07-07T06:08:00.259250356Z" level=warning msg="cleaning up after shim disconnected" id=b75994967ea85e6266bec45372677897e8ec69789de1651cde251837b6760246 namespace=k8s.io Jul 7 06:08:00.259283 containerd[1436]: time="2025-07-07T06:08:00.259260595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:08:00.805358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b75994967ea85e6266bec45372677897e8ec69789de1651cde251837b6760246-rootfs.mount: Deactivated successfully. Jul 7 06:08:00.970138 kubelet[2468]: I0707 06:08:00.970110 2468 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:08:00.970896 kubelet[2468]: E0707 06:08:00.970426 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:00.972965 containerd[1436]: time="2025-07-07T06:08:00.972934215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:08:00.994249 kubelet[2468]: I0707 06:08:00.994029 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55cdc678b8-mdl8m" podStartSLOduration=2.729195879 podStartE2EDuration="3.994013231s" podCreationTimestamp="2025-07-07 06:07:57 +0000 UTC" firstStartedPulling="2025-07-07 06:07:58.03841977 +0000 UTC m=+21.263031793" lastFinishedPulling="2025-07-07 06:07:59.303237162 +0000 UTC m=+22.527849145" observedRunningTime="2025-07-07 06:07:59.979086169 +0000 UTC m=+23.203698192" watchObservedRunningTime="2025-07-07 06:08:00.994013231 +0000 UTC m=+24.218625254" Jul 7 06:08:01.862634 kubelet[2468]: E0707 06:08:01.862577 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwndf" podUID="b4408009-2167-498a-9cbf-5110d3e01355" Jul 7 06:08:03.666590 containerd[1436]: time="2025-07-07T06:08:03.666531096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:03.667467 containerd[1436]: time="2025-07-07T06:08:03.667145324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 06:08:03.668038 containerd[1436]: time="2025-07-07T06:08:03.667990347Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:03.670231 containerd[1436]: time="2025-07-07T06:08:03.669940709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:03.670727 containerd[1436]: time="2025-07-07T06:08:03.670698574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.697725799s" Jul 7 06:08:03.670889 containerd[1436]: time="2025-07-07T06:08:03.670797652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 06:08:03.674027 containerd[1436]: time="2025-07-07T06:08:03.673996229Z" level=info msg="CreateContainer within sandbox \"b4a1f04719c8edf6a428af7eb0d3b0e16022522df1dd87529bb24b30c6638d24\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:08:03.687886 containerd[1436]: time="2025-07-07T06:08:03.687844516Z" level=info msg="CreateContainer within sandbox \"b4a1f04719c8edf6a428af7eb0d3b0e16022522df1dd87529bb24b30c6638d24\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3d8288dab5a5ec0591f062aa686e1fe3a1b36963f7f3fc8541bdb3402ac485d1\"" Jul 7 06:08:03.688363 containerd[1436]: time="2025-07-07T06:08:03.688343786Z" level=info msg="StartContainer for \"3d8288dab5a5ec0591f062aa686e1fe3a1b36963f7f3fc8541bdb3402ac485d1\"" Jul 7 06:08:03.718927 systemd[1]: Started cri-containerd-3d8288dab5a5ec0591f062aa686e1fe3a1b36963f7f3fc8541bdb3402ac485d1.scope - libcontainer container 3d8288dab5a5ec0591f062aa686e1fe3a1b36963f7f3fc8541bdb3402ac485d1. Jul 7 06:08:03.742790 containerd[1436]: time="2025-07-07T06:08:03.742752195Z" level=info msg="StartContainer for \"3d8288dab5a5ec0591f062aa686e1fe3a1b36963f7f3fc8541bdb3402ac485d1\" returns successfully" Jul 7 06:08:03.863717 kubelet[2468]: E0707 06:08:03.863161 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwndf" podUID="b4408009-2167-498a-9cbf-5110d3e01355" Jul 7 06:08:04.371581 systemd[1]: cri-containerd-3d8288dab5a5ec0591f062aa686e1fe3a1b36963f7f3fc8541bdb3402ac485d1.scope: Deactivated successfully. Jul 7 06:08:04.394471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d8288dab5a5ec0591f062aa686e1fe3a1b36963f7f3fc8541bdb3402ac485d1-rootfs.mount: Deactivated successfully. Jul 7 06:08:04.397438 kubelet[2468]: I0707 06:08:04.397266 2468 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 06:08:04.399898 containerd[1436]: time="2025-07-07T06:08:04.399831304Z" level=info msg="shim disconnected" id=3d8288dab5a5ec0591f062aa686e1fe3a1b36963f7f3fc8541bdb3402ac485d1 namespace=k8s.io Jul 7 06:08:04.399898 containerd[1436]: time="2025-07-07T06:08:04.399885583Z" level=warning msg="cleaning up after shim disconnected" id=3d8288dab5a5ec0591f062aa686e1fe3a1b36963f7f3fc8541bdb3402ac485d1 namespace=k8s.io Jul 7 06:08:04.399898 containerd[1436]: time="2025-07-07T06:08:04.399895223Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:08:04.443615 systemd[1]: Created slice kubepods-burstable-pod7f20084f_0209_4ec2_bfca_cd55b8ec8924.slice - libcontainer container kubepods-burstable-pod7f20084f_0209_4ec2_bfca_cd55b8ec8924.slice. Jul 7 06:08:04.461084 systemd[1]: Created slice kubepods-besteffort-pod689618a0_74c2_4971_a557_2ab4b585a588.slice - libcontainer container kubepods-besteffort-pod689618a0_74c2_4971_a557_2ab4b585a588.slice. 
Jul 7 06:08:04.466752 kubelet[2468]: I0707 06:08:04.465912 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/086322d3-4360-4797-8169-4da48ff64f3a-tigera-ca-bundle\") pod \"calico-kube-controllers-695fbfc78b-c6gs2\" (UID: \"086322d3-4360-4797-8169-4da48ff64f3a\") " pod="calico-system/calico-kube-controllers-695fbfc78b-c6gs2" Jul 7 06:08:04.466928 kubelet[2468]: I0707 06:08:04.466907 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8czj\" (UniqueName: \"kubernetes.io/projected/85139026-c5f1-40c8-960a-03a764baad77-kube-api-access-k8czj\") pod \"whisker-5f5d5f8bf8-vtsgr\" (UID: \"85139026-c5f1-40c8-960a-03a764baad77\") " pod="calico-system/whisker-5f5d5f8bf8-vtsgr" Jul 7 06:08:04.467027 kubelet[2468]: I0707 06:08:04.467013 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eb3733b-fa70-428d-847e-d3206b3573f6-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-cfb5m\" (UID: \"9eb3733b-fa70-428d-847e-d3206b3573f6\") " pod="calico-system/goldmane-768f4c5c69-cfb5m" Jul 7 06:08:04.467107 kubelet[2468]: I0707 06:08:04.467093 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/467d15f7-ca34-419b-8788-314a876c49ce-calico-apiserver-certs\") pod \"calico-apiserver-674899bd6d-nhxd2\" (UID: \"467d15f7-ca34-419b-8788-314a876c49ce\") " pod="calico-apiserver/calico-apiserver-674899bd6d-nhxd2" Jul 7 06:08:04.467191 kubelet[2468]: I0707 06:08:04.467175 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb3733b-fa70-428d-847e-d3206b3573f6-config\") pod \"goldmane-768f4c5c69-cfb5m\" (UID: \"9eb3733b-fa70-428d-847e-d3206b3573f6\") " pod="calico-system/goldmane-768f4c5c69-cfb5m" Jul 7 06:08:04.467278 kubelet[2468]: I0707 06:08:04.467263 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgvmz\" (UniqueName: \"kubernetes.io/projected/689618a0-74c2-4971-a557-2ab4b585a588-kube-api-access-lgvmz\") pod \"calico-apiserver-674899bd6d-llcx9\" (UID: \"689618a0-74c2-4971-a557-2ab4b585a588\") " pod="calico-apiserver/calico-apiserver-674899bd6d-llcx9" Jul 7 06:08:04.467363 kubelet[2468]: I0707 06:08:04.467348 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85139026-c5f1-40c8-960a-03a764baad77-whisker-ca-bundle\") pod \"whisker-5f5d5f8bf8-vtsgr\" (UID: \"85139026-c5f1-40c8-960a-03a764baad77\") " pod="calico-system/whisker-5f5d5f8bf8-vtsgr" Jul 7 06:08:04.467451 kubelet[2468]: I0707 06:08:04.467434 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9eb3733b-fa70-428d-847e-d3206b3573f6-goldmane-key-pair\") pod \"goldmane-768f4c5c69-cfb5m\" (UID: \"9eb3733b-fa70-428d-847e-d3206b3573f6\") " pod="calico-system/goldmane-768f4c5c69-cfb5m" Jul 7 06:08:04.467548 kubelet[2468]: I0707 06:08:04.467532 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5tq\" (UniqueName: 
\"kubernetes.io/projected/086322d3-4360-4797-8169-4da48ff64f3a-kube-api-access-bv5tq\") pod \"calico-kube-controllers-695fbfc78b-c6gs2\" (UID: \"086322d3-4360-4797-8169-4da48ff64f3a\") " pod="calico-system/calico-kube-controllers-695fbfc78b-c6gs2" Jul 7 06:08:04.469479 kubelet[2468]: I0707 06:08:04.469449 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85139026-c5f1-40c8-960a-03a764baad77-whisker-backend-key-pair\") pod \"whisker-5f5d5f8bf8-vtsgr\" (UID: \"85139026-c5f1-40c8-960a-03a764baad77\") " pod="calico-system/whisker-5f5d5f8bf8-vtsgr" Jul 7 06:08:04.469652 kubelet[2468]: I0707 06:08:04.469615 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vcf\" (UniqueName: \"kubernetes.io/projected/9eb3733b-fa70-428d-847e-d3206b3573f6-kube-api-access-r4vcf\") pod \"goldmane-768f4c5c69-cfb5m\" (UID: \"9eb3733b-fa70-428d-847e-d3206b3573f6\") " pod="calico-system/goldmane-768f4c5c69-cfb5m" Jul 7 06:08:04.469712 kubelet[2468]: I0707 06:08:04.469681 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/689618a0-74c2-4971-a557-2ab4b585a588-calico-apiserver-certs\") pod \"calico-apiserver-674899bd6d-llcx9\" (UID: \"689618a0-74c2-4971-a557-2ab4b585a588\") " pod="calico-apiserver/calico-apiserver-674899bd6d-llcx9" Jul 7 06:08:04.469712 kubelet[2468]: I0707 06:08:04.469702 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f20084f-0209-4ec2-bfca-cd55b8ec8924-config-volume\") pod \"coredns-674b8bbfcf-7g75c\" (UID: \"7f20084f-0209-4ec2-bfca-cd55b8ec8924\") " pod="kube-system/coredns-674b8bbfcf-7g75c" Jul 7 06:08:04.469765 kubelet[2468]: I0707 06:08:04.469723 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7m6b\" (UniqueName: \"kubernetes.io/projected/467d15f7-ca34-419b-8788-314a876c49ce-kube-api-access-s7m6b\") pod \"calico-apiserver-674899bd6d-nhxd2\" (UID: \"467d15f7-ca34-419b-8788-314a876c49ce\") " pod="calico-apiserver/calico-apiserver-674899bd6d-nhxd2" Jul 7 06:08:04.469765 kubelet[2468]: I0707 06:08:04.469741 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klpsz\" (UniqueName: \"kubernetes.io/projected/7f20084f-0209-4ec2-bfca-cd55b8ec8924-kube-api-access-klpsz\") pod \"coredns-674b8bbfcf-7g75c\" (UID: \"7f20084f-0209-4ec2-bfca-cd55b8ec8924\") " pod="kube-system/coredns-674b8bbfcf-7g75c" Jul 7 06:08:04.470985 systemd[1]: Created slice kubepods-besteffort-pod85139026_c5f1_40c8_960a_03a764baad77.slice - libcontainer container kubepods-besteffort-pod85139026_c5f1_40c8_960a_03a764baad77.slice. Jul 7 06:08:04.484288 systemd[1]: Created slice kubepods-besteffort-pod086322d3_4360_4797_8169_4da48ff64f3a.slice - libcontainer container kubepods-besteffort-pod086322d3_4360_4797_8169_4da48ff64f3a.slice. Jul 7 06:08:04.493163 systemd[1]: Created slice kubepods-besteffort-pod9eb3733b_fa70_428d_847e_d3206b3573f6.slice - libcontainer container kubepods-besteffort-pod9eb3733b_fa70_428d_847e_d3206b3573f6.slice. 
Jul 7 06:08:04.499082 systemd[1]: Created slice kubepods-burstable-pod27409c61_b572_4164_bf93_9ca33f0fe80a.slice - libcontainer container kubepods-burstable-pod27409c61_b572_4164_bf93_9ca33f0fe80a.slice. Jul 7 06:08:04.503187 systemd[1]: Created slice kubepods-besteffort-pod467d15f7_ca34_419b_8788_314a876c49ce.slice - libcontainer container kubepods-besteffort-pod467d15f7_ca34_419b_8788_314a876c49ce.slice. Jul 7 06:08:04.570784 kubelet[2468]: I0707 06:08:04.570737 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lb46\" (UniqueName: \"kubernetes.io/projected/27409c61-b572-4164-bf93-9ca33f0fe80a-kube-api-access-4lb46\") pod \"coredns-674b8bbfcf-zp2bz\" (UID: \"27409c61-b572-4164-bf93-9ca33f0fe80a\") " pod="kube-system/coredns-674b8bbfcf-zp2bz" Jul 7 06:08:04.571391 kubelet[2468]: I0707 06:08:04.571338 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27409c61-b572-4164-bf93-9ca33f0fe80a-config-volume\") pod \"coredns-674b8bbfcf-zp2bz\" (UID: \"27409c61-b572-4164-bf93-9ca33f0fe80a\") " pod="kube-system/coredns-674b8bbfcf-zp2bz" Jul 7 06:08:04.755749 kubelet[2468]: E0707 06:08:04.755544 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:04.756696 containerd[1436]: time="2025-07-07T06:08:04.756549597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7g75c,Uid:7f20084f-0209-4ec2-bfca-cd55b8ec8924,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:04.768208 containerd[1436]: time="2025-07-07T06:08:04.768171302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674899bd6d-llcx9,Uid:689618a0-74c2-4971-a557-2ab4b585a588,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:08:04.781604 containerd[1436]: time="2025-07-07T06:08:04.781535536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f5d5f8bf8-vtsgr,Uid:85139026-c5f1-40c8-960a-03a764baad77,Namespace:calico-system,Attempt:0,}" Jul 7 06:08:04.788789 containerd[1436]: time="2025-07-07T06:08:04.788261771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695fbfc78b-c6gs2,Uid:086322d3-4360-4797-8169-4da48ff64f3a,Namespace:calico-system,Attempt:0,}" Jul 7 06:08:04.801978 kubelet[2468]: E0707 06:08:04.801941 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:04.803779 containerd[1436]: time="2025-07-07T06:08:04.802847902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zp2bz,Uid:27409c61-b572-4164-bf93-9ca33f0fe80a,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:04.803779 containerd[1436]: time="2025-07-07T06:08:04.803073538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cfb5m,Uid:9eb3733b-fa70-428d-847e-d3206b3573f6,Namespace:calico-system,Attempt:0,}" Jul 7 06:08:04.816277 containerd[1436]: time="2025-07-07T06:08:04.816203015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674899bd6d-nhxd2,Uid:467d15f7-ca34-419b-8788-314a876c49ce,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:08:05.009695 containerd[1436]: time="2025-07-07T06:08:05.009419814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" 
Jul 7 06:08:05.134322 containerd[1436]: time="2025-07-07T06:08:05.134264332Z" level=error msg="Failed to destroy network for sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.135138 containerd[1436]: time="2025-07-07T06:08:05.134949841Z" level=error msg="encountered an error cleaning up failed sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.135217 containerd[1436]: time="2025-07-07T06:08:05.135160997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695fbfc78b-c6gs2,Uid:086322d3-4360-4797-8169-4da48ff64f3a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.135613 kubelet[2468]: E0707 06:08:05.135385 2468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.135613 kubelet[2468]: E0707 06:08:05.135450 2468 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695fbfc78b-c6gs2" Jul 7 06:08:05.140694 kubelet[2468]: E0707 06:08:05.140484 2468 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695fbfc78b-c6gs2" Jul 7 06:08:05.140694 kubelet[2468]: E0707 06:08:05.140589 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-695fbfc78b-c6gs2_calico-system(086322d3-4360-4797-8169-4da48ff64f3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-695fbfc78b-c6gs2_calico-system(086322d3-4360-4797-8169-4da48ff64f3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695fbfc78b-c6gs2" podUID="086322d3-4360-4797-8169-4da48ff64f3a" Jul 7 06:08:05.140984 containerd[1436]: time="2025-07-07T06:08:05.140848898Z" level=error msg="Failed to destroy network for sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.141884 containerd[1436]: time="2025-07-07T06:08:05.141799482Z" level=error msg="encountered an error cleaning up failed sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.141958 containerd[1436]: time="2025-07-07T06:08:05.141908840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cfb5m,Uid:9eb3733b-fa70-428d-847e-d3206b3573f6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.142225 kubelet[2468]: E0707 06:08:05.142078 2468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.142225 kubelet[2468]: E0707 06:08:05.142125 2468 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-cfb5m" Jul 7 06:08:05.142225 kubelet[2468]: E0707 06:08:05.142143 2468 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-cfb5m" Jul 7 06:08:05.142341 kubelet[2468]: E0707 06:08:05.142181 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-cfb5m_calico-system(9eb3733b-fa70-428d-847e-d3206b3573f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-cfb5m_calico-system(9eb3733b-fa70-428d-847e-d3206b3573f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-cfb5m" podUID="9eb3733b-fa70-428d-847e-d3206b3573f6" Jul 7 06:08:05.142387 containerd[1436]: time="2025-07-07T06:08:05.142317673Z" level=error msg="Failed to destroy network for sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.146458 containerd[1436]: time="2025-07-07T06:08:05.146417602Z" level=error msg="encountered an error cleaning up failed sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.146539 containerd[1436]: time="2025-07-07T06:08:05.146477001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674899bd6d-llcx9,Uid:689618a0-74c2-4971-a557-2ab4b585a588,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.146724 kubelet[2468]: E0707 06:08:05.146663 2468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.146770 kubelet[2468]: E0707 06:08:05.146728 2468 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674899bd6d-llcx9" Jul 7 06:08:05.146770 kubelet[2468]: E0707 06:08:05.146745 2468 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674899bd6d-llcx9" Jul 7 06:08:05.146827 kubelet[2468]: E0707 06:08:05.146782 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-674899bd6d-llcx9_calico-apiserver(689618a0-74c2-4971-a557-2ab4b585a588)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-674899bd6d-llcx9_calico-apiserver(689618a0-74c2-4971-a557-2ab4b585a588)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674899bd6d-llcx9" podUID="689618a0-74c2-4971-a557-2ab4b585a588" Jul 7 06:08:05.150550 containerd[1436]: time="2025-07-07T06:08:05.150485172Z" level=error msg="Failed to destroy network for sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.150892 containerd[1436]: time="2025-07-07T06:08:05.150844685Z" level=error msg="encountered an error cleaning up failed sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.150944 containerd[1436]: time="2025-07-07T06:08:05.150897764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7g75c,Uid:7f20084f-0209-4ec2-bfca-cd55b8ec8924,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.151240 kubelet[2468]: E0707 06:08:05.151209 2468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.151321 kubelet[2468]: E0707 06:08:05.151255 2468 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7g75c" Jul 7 06:08:05.151321 kubelet[2468]: E0707 06:08:05.151273 2468 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7g75c" Jul 7 06:08:05.151396 kubelet[2468]: E0707 06:08:05.151313 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7g75c_kube-system(7f20084f-0209-4ec2-bfca-cd55b8ec8924)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7g75c_kube-system(7f20084f-0209-4ec2-bfca-cd55b8ec8924)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7g75c" podUID="7f20084f-0209-4ec2-bfca-cd55b8ec8924" Jul 7 06:08:05.157438 containerd[1436]: time="2025-07-07T06:08:05.157397572Z" level=error msg="Failed to destroy network for sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.158098 containerd[1436]: time="2025-07-07T06:08:05.158065960Z" level=error msg="Failed to destroy network for sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.158358 containerd[1436]: time="2025-07-07T06:08:05.158332276Z" level=error msg="encountered an error cleaning up failed sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.158398 containerd[1436]: time="2025-07-07T06:08:05.158376035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674899bd6d-nhxd2,Uid:467d15f7-ca34-419b-8788-314a876c49ce,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.158732 containerd[1436]: time="2025-07-07T06:08:05.158638030Z" level=error msg="encountered an error cleaning up failed sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.158787 kubelet[2468]: E0707 06:08:05.158539 2468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.158787 kubelet[2468]: E0707 06:08:05.158593 2468 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-674899bd6d-nhxd2" Jul 7 06:08:05.158787 kubelet[2468]: E0707 06:08:05.158615 2468 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674899bd6d-nhxd2" Jul 7 06:08:05.158869 containerd[1436]: time="2025-07-07T06:08:05.158726829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f5d5f8bf8-vtsgr,Uid:85139026-c5f1-40c8-960a-03a764baad77,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.158910 kubelet[2468]: E0707 06:08:05.158659 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-674899bd6d-nhxd2_calico-apiserver(467d15f7-ca34-419b-8788-314a876c49ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-674899bd6d-nhxd2_calico-apiserver(467d15f7-ca34-419b-8788-314a876c49ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674899bd6d-nhxd2" podUID="467d15f7-ca34-419b-8788-314a876c49ce" Jul 7 06:08:05.158910 kubelet[2468]: E0707 06:08:05.158882 2468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.158974 kubelet[2468]: E0707 06:08:05.158924 2468 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f5d5f8bf8-vtsgr" Jul 7 06:08:05.158974 kubelet[2468]: E0707 06:08:05.158957 2468 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f5d5f8bf8-vtsgr" Jul 7 06:08:05.159019 kubelet[2468]: E0707 06:08:05.158994 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-5f5d5f8bf8-vtsgr_calico-system(85139026-c5f1-40c8-960a-03a764baad77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f5d5f8bf8-vtsgr_calico-system(85139026-c5f1-40c8-960a-03a764baad77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f5d5f8bf8-vtsgr" podUID="85139026-c5f1-40c8-960a-03a764baad77" Jul 7 06:08:05.159730 containerd[1436]: time="2025-07-07T06:08:05.159437857Z" level=error msg="Failed to destroy network for sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.159943 containerd[1436]: time="2025-07-07T06:08:05.159915208Z" level=error msg="encountered an error cleaning up failed sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.160125 containerd[1436]: time="2025-07-07T06:08:05.160011087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zp2bz,Uid:27409c61-b572-4164-bf93-9ca33f0fe80a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.160355 kubelet[2468]: E0707 06:08:05.160328 2468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:05.160411 kubelet[2468]: E0707 06:08:05.160368 2468 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zp2bz" Jul 7 06:08:05.160411 kubelet[2468]: E0707 06:08:05.160384 2468 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zp2bz" Jul 7 06:08:05.160500 kubelet[2468]: E0707 06:08:05.160454 2468 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zp2bz_kube-system(27409c61-b572-4164-bf93-9ca33f0fe80a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zp2bz_kube-system(27409c61-b572-4164-bf93-9ca33f0fe80a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zp2bz" podUID="27409c61-b572-4164-bf93-9ca33f0fe80a" Jul 7 06:08:05.686313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9-shm.mount: Deactivated successfully. Jul 7 06:08:05.686418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406-shm.mount: Deactivated successfully. Jul 7 06:08:05.871209 systemd[1]: Created slice kubepods-besteffort-podb4408009_2167_498a_9cbf_5110d3e01355.slice - libcontainer container kubepods-besteffort-podb4408009_2167_498a_9cbf_5110d3e01355.slice. Jul 7 06:08:05.875042 containerd[1436]: time="2025-07-07T06:08:05.875010869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwndf,Uid:b4408009-2167-498a-9cbf-5110d3e01355,Namespace:calico-system,Attempt:0,}" Jul 7 06:08:06.017769 containerd[1436]: time="2025-07-07T06:08:06.015795768Z" level=error msg="Failed to destroy network for sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.017769 containerd[1436]: time="2025-07-07T06:08:06.016140043Z" level=error msg="encountered an error cleaning up failed sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.017769 containerd[1436]: time="2025-07-07T06:08:06.016183202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwndf,Uid:b4408009-2167-498a-9cbf-5110d3e01355,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.025194 kubelet[2468]: E0707 06:08:06.025050 2468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.025194 kubelet[2468]: E0707 06:08:06.025111 2468 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fwndf" Jul 7 06:08:06.025194 kubelet[2468]: E0707 06:08:06.025136 2468 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fwndf" Jul 7 06:08:06.025390 kubelet[2468]: E0707 06:08:06.025179 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fwndf_calico-system(b4408009-2167-498a-9cbf-5110d3e01355)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fwndf_calico-system(b4408009-2167-498a-9cbf-5110d3e01355)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fwndf" podUID="b4408009-2167-498a-9cbf-5110d3e01355" Jul 7 06:08:06.027249 kubelet[2468]: I0707 06:08:06.026747 2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:06.027389 containerd[1436]: time="2025-07-07T06:08:06.027348741Z" level=info msg="StopPodSandbox for \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\"" Jul 7 06:08:06.027547 containerd[1436]: time="2025-07-07T06:08:06.027519858Z" level=info msg="Ensure that sandbox a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d in task-service has been cleanup successfully" Jul 7 06:08:06.029246 kubelet[2468]: I0707 06:08:06.029212 2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:06.029731 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378-shm.mount: Deactivated successfully. 
Jul 7 06:08:06.030448 kubelet[2468]: I0707 06:08:06.030423 2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Jul 7 06:08:06.030954 containerd[1436]: time="2025-07-07T06:08:06.030905923Z" level=info msg="StopPodSandbox for \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\"" Jul 7 06:08:06.031102 containerd[1436]: time="2025-07-07T06:08:06.031074000Z" level=info msg="Ensure that sandbox 116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520 in task-service has been cleanup successfully" Jul 7 06:08:06.031484 containerd[1436]: time="2025-07-07T06:08:06.031452554Z" level=info msg="StopPodSandbox for \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\"" Jul 7 06:08:06.031623 containerd[1436]: time="2025-07-07T06:08:06.031593192Z" level=info msg="Ensure that sandbox 3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4 in task-service has been cleanup successfully" Jul 7 06:08:06.042437 kubelet[2468]: I0707 06:08:06.042247 2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Jul 7 06:08:06.042931 containerd[1436]: time="2025-07-07T06:08:06.042861209Z" level=info msg="StopPodSandbox for \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\"" Jul 7 06:08:06.043950 containerd[1436]: time="2025-07-07T06:08:06.043018206Z" level=info msg="Ensure that sandbox b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130 in task-service has been cleanup successfully" Jul 7 06:08:06.046400 kubelet[2468]: I0707 06:08:06.046371 2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:06.046962 containerd[1436]: time="2025-07-07T06:08:06.046929703Z" level=info msg="StopPodSandbox for \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\"" Jul 7 06:08:06.047093 containerd[1436]: time="2025-07-07T06:08:06.047072861Z" level=info msg="Ensure that sandbox c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339 in task-service has been cleanup successfully" Jul 7 06:08:06.048841 kubelet[2468]: I0707 06:08:06.048817 2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Jul 7 06:08:06.050379 containerd[1436]: time="2025-07-07T06:08:06.049449062Z" level=info msg="StopPodSandbox for \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\"" Jul 7 06:08:06.050379 containerd[1436]: time="2025-07-07T06:08:06.049592900Z" level=info msg="Ensure that sandbox fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9 in task-service has been cleanup successfully" Jul 7 06:08:06.053072 kubelet[2468]: I0707 06:08:06.053046 2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Jul 7 06:08:06.053888 containerd[1436]: time="2025-07-07T06:08:06.053499756Z" level=info msg="StopPodSandbox for \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\"" Jul 7 06:08:06.053888 containerd[1436]: time="2025-07-07T06:08:06.053679793Z" level=info msg="Ensure that sandbox ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406 in task-service has been cleanup successfully" Jul 7 06:08:06.103319 
containerd[1436]: time="2025-07-07T06:08:06.103267669Z" level=error msg="StopPodSandbox for \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\" failed" error="failed to destroy network for sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.103590 containerd[1436]: time="2025-07-07T06:08:06.103391707Z" level=error msg="StopPodSandbox for \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\" failed" error="failed to destroy network for sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.103626 kubelet[2468]: E0707 06:08:06.103509 2468 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Jul 7 06:08:06.103901 kubelet[2468]: E0707 06:08:06.103815 2468 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:06.107369 kubelet[2468]: E0707 06:08:06.107177 2468 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339"} Jul 7 06:08:06.107369 kubelet[2468]: E0707 06:08:06.107262 2468 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"086322d3-4360-4797-8169-4da48ff64f3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:08:06.107369 kubelet[2468]: E0707 06:08:06.107289 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"086322d3-4360-4797-8169-4da48ff64f3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695fbfc78b-c6gs2" podUID="086322d3-4360-4797-8169-4da48ff64f3a" Jul 7 06:08:06.107738 kubelet[2468]: E0707 06:08:06.107629 2468 kuberuntime_manager.go:1586] "Failed to 
stop sandbox" podSandboxID={"Type":"containerd","ID":"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"} Jul 7 06:08:06.107738 kubelet[2468]: E0707 06:08:06.107694 2468 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"689618a0-74c2-4971-a557-2ab4b585a588\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:08:06.107738 kubelet[2468]: E0707 06:08:06.107717 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"689618a0-74c2-4971-a557-2ab4b585a588\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674899bd6d-llcx9" podUID="689618a0-74c2-4971-a557-2ab4b585a588" Jul 7 06:08:06.108867 containerd[1436]: time="2025-07-07T06:08:06.108649741Z" level=error msg="StopPodSandbox for \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\" failed" error="failed to destroy network for sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.109834 kubelet[2468]: E0707 06:08:06.109800 2468 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:06.110494 kubelet[2468]: E0707 06:08:06.109844 2468 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d"} Jul 7 06:08:06.110494 kubelet[2468]: E0707 06:08:06.109870 2468 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"467d15f7-ca34-419b-8788-314a876c49ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:08:06.110494 kubelet[2468]: E0707 06:08:06.109888 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"467d15f7-ca34-419b-8788-314a876c49ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674899bd6d-nhxd2" podUID="467d15f7-ca34-419b-8788-314a876c49ce" Jul 7 06:08:06.120882 containerd[1436]: time="2025-07-07T06:08:06.120829144Z" level=error msg="StopPodSandbox for \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\" failed" error="failed to destroy network for sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.121103 kubelet[2468]: E0707 06:08:06.121053 2468 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Jul 7 06:08:06.121172 kubelet[2468]: E0707 06:08:06.121106 2468 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4"} Jul 7 06:08:06.121172 kubelet[2468]: E0707 06:08:06.121139 2468 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27409c61-b572-4164-bf93-9ca33f0fe80a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:08:06.121248 kubelet[2468]: E0707 06:08:06.121175 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27409c61-b572-4164-bf93-9ca33f0fe80a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zp2bz" podUID="27409c61-b572-4164-bf93-9ca33f0fe80a" Jul 7 06:08:06.121848 containerd[1436]: time="2025-07-07T06:08:06.121789528Z" level=error msg="StopPodSandbox for \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\" failed" error="failed to destroy network for sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.122041 kubelet[2468]: E0707 06:08:06.122001 2468 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Jul 7 06:08:06.122089 kubelet[2468]: E0707 06:08:06.122049 2468 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130"} Jul 7 06:08:06.122089 kubelet[2468]: E0707 06:08:06.122072 2468 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85139026-c5f1-40c8-960a-03a764baad77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:08:06.122164 kubelet[2468]: E0707 06:08:06.122089 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85139026-c5f1-40c8-960a-03a764baad77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f5d5f8bf8-vtsgr" podUID="85139026-c5f1-40c8-960a-03a764baad77" Jul 7 06:08:06.123647 containerd[1436]: time="2025-07-07T06:08:06.123506860Z" level=error msg="StopPodSandbox for \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\" failed" error="failed to destroy network for sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.123824 kubelet[2468]: E0707 06:08:06.123664 2468 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:06.123824 kubelet[2468]: E0707 06:08:06.123706 2468 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520"} Jul 7 06:08:06.123824 kubelet[2468]: E0707 06:08:06.123729 2468 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9eb3733b-fa70-428d-847e-d3206b3573f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:08:06.123824 kubelet[2468]: E0707 06:08:06.123748 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"9eb3733b-fa70-428d-847e-d3206b3573f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-cfb5m" podUID="9eb3733b-fa70-428d-847e-d3206b3573f6" Jul 7 06:08:06.129761 containerd[1436]: time="2025-07-07T06:08:06.129638681Z" level=error msg="StopPodSandbox for \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\" failed" error="failed to destroy network for sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:06.130097 kubelet[2468]: E0707 06:08:06.129962 2468 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Jul 7 06:08:06.130097 kubelet[2468]: E0707 06:08:06.130007 2468 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406"} Jul 7 06:08:06.130097 kubelet[2468]: E0707 06:08:06.130035 2468 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f20084f-0209-4ec2-bfca-cd55b8ec8924\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:08:06.130097 kubelet[2468]: E0707 06:08:06.130055 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f20084f-0209-4ec2-bfca-cd55b8ec8924\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7g75c" podUID="7f20084f-0209-4ec2-bfca-cd55b8ec8924" Jul 7 06:08:07.055705 kubelet[2468]: I0707 06:08:07.055653 2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:07.057909 containerd[1436]: time="2025-07-07T06:08:07.057872674Z" level=info msg="StopPodSandbox for \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\"" Jul 7 06:08:07.058631 containerd[1436]: time="2025-07-07T06:08:07.058365666Z" level=info msg="Ensure that sandbox 90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378 in task-service has been cleanup 
successfully" Jul 7 06:08:07.084449 containerd[1436]: time="2025-07-07T06:08:07.084399670Z" level=error msg="StopPodSandbox for \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\" failed" error="failed to destroy network for sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:08:07.085060 kubelet[2468]: E0707 06:08:07.084876 2468 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:07.085060 kubelet[2468]: E0707 06:08:07.084934 2468 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378"} Jul 7 06:08:07.085060 kubelet[2468]: E0707 06:08:07.084971 2468 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4408009-2167-498a-9cbf-5110d3e01355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:08:07.085060 kubelet[2468]: E0707 06:08:07.084992 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4408009-2167-498a-9cbf-5110d3e01355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fwndf" podUID="b4408009-2167-498a-9cbf-5110d3e01355" Jul 7 06:08:08.016938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635613089.mount: Deactivated successfully. 
Jul 7 06:08:08.289699 containerd[1436]: time="2025-07-07T06:08:08.289562288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:08.290146 containerd[1436]: time="2025-07-07T06:08:08.290042761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 06:08:08.290864 containerd[1436]: time="2025-07-07T06:08:08.290831030Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:08.293136 containerd[1436]: time="2025-07-07T06:08:08.292757723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:08.293587 containerd[1436]: time="2025-07-07T06:08:08.293560311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.284093178s" Jul 7 06:08:08.293649 containerd[1436]: time="2025-07-07T06:08:08.293592911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 06:08:08.302908 containerd[1436]: time="2025-07-07T06:08:08.302854339Z" level=info msg="CreateContainer within sandbox \"b4a1f04719c8edf6a428af7eb0d3b0e16022522df1dd87529bb24b30c6638d24\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:08:08.316538 containerd[1436]: time="2025-07-07T06:08:08.316484104Z" level=info msg="CreateContainer within sandbox \"b4a1f04719c8edf6a428af7eb0d3b0e16022522df1dd87529bb24b30c6638d24\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"23a8fad5caa33d3f31cf622abca621dc676f06037d2c6cceb9c57238b1249342\"" Jul 7 06:08:08.317028 containerd[1436]: time="2025-07-07T06:08:08.317002057Z" level=info msg="StartContainer for \"23a8fad5caa33d3f31cf622abca621dc676f06037d2c6cceb9c57238b1249342\"" Jul 7 06:08:08.366988 systemd[1]: Started cri-containerd-23a8fad5caa33d3f31cf622abca621dc676f06037d2c6cceb9c57238b1249342.scope - libcontainer container 23a8fad5caa33d3f31cf622abca621dc676f06037d2c6cceb9c57238b1249342. Jul 7 06:08:08.397516 containerd[1436]: time="2025-07-07T06:08:08.394458632Z" level=info msg="StartContainer for \"23a8fad5caa33d3f31cf622abca621dc676f06037d2c6cceb9c57238b1249342\" returns successfully" Jul 7 06:08:08.614199 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:08:08.614321 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 7 06:08:08.889325 containerd[1436]: time="2025-07-07T06:08:08.889264334Z" level=info msg="StopPodSandbox for \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\"" Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.000 [INFO][3765] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.001 [INFO][3765] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" iface="eth0" netns="/var/run/netns/cni-6104a002-81d7-c7e7-ba61-0dcbd988dfc1" Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.001 [INFO][3765] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" iface="eth0" netns="/var/run/netns/cni-6104a002-81d7-c7e7-ba61-0dcbd988dfc1" Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.004 [INFO][3765] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" iface="eth0" netns="/var/run/netns/cni-6104a002-81d7-c7e7-ba61-0dcbd988dfc1" Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.004 [INFO][3765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.006 [INFO][3765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.093 [INFO][3774] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" HandleID="k8s-pod-network.b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Workload="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0" Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.093 [INFO][3774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.093 [INFO][3774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.109 [WARNING][3774] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" HandleID="k8s-pod-network.b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Workload="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0" Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.109 [INFO][3774] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" HandleID="k8s-pod-network.b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Workload="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0" Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.111 [INFO][3774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:09.118844 containerd[1436]: 2025-07-07 06:08:09.114 [INFO][3765] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Jul 7 06:08:09.119236 containerd[1436]: time="2025-07-07T06:08:09.119007642Z" level=info msg="TearDown network for sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\" successfully" Jul 7 06:08:09.119236 containerd[1436]: time="2025-07-07T06:08:09.119034482Z" level=info msg="StopPodSandbox for \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\" returns successfully" Jul 7 06:08:09.121420 systemd[1]: run-netns-cni\x2d6104a002\x2d81d7\x2dc7e7\x2dba61\x2d0dcbd988dfc1.mount: Deactivated successfully. Jul 7 06:08:09.305827 kubelet[2468]: I0707 06:08:09.305783 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85139026-c5f1-40c8-960a-03a764baad77-whisker-backend-key-pair\") pod \"85139026-c5f1-40c8-960a-03a764baad77\" (UID: \"85139026-c5f1-40c8-960a-03a764baad77\") " Jul 7 06:08:09.306230 kubelet[2468]: I0707 06:08:09.305842 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8czj\" (UniqueName: \"kubernetes.io/projected/85139026-c5f1-40c8-960a-03a764baad77-kube-api-access-k8czj\") pod \"85139026-c5f1-40c8-960a-03a764baad77\" (UID: \"85139026-c5f1-40c8-960a-03a764baad77\") " Jul 7 06:08:09.306230 kubelet[2468]: I0707 06:08:09.306168 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85139026-c5f1-40c8-960a-03a764baad77-whisker-ca-bundle\") pod \"85139026-c5f1-40c8-960a-03a764baad77\" (UID: \"85139026-c5f1-40c8-960a-03a764baad77\") " Jul 7 06:08:09.316098 systemd[1]: var-lib-kubelet-pods-85139026\x2dc5f1\x2d40c8\x2d960a\x2d03a764baad77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk8czj.mount: Deactivated successfully. Jul 7 06:08:09.318922 kubelet[2468]: I0707 06:08:09.318887 2468 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85139026-c5f1-40c8-960a-03a764baad77-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "85139026-c5f1-40c8-960a-03a764baad77" (UID: "85139026-c5f1-40c8-960a-03a764baad77"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:08:09.319153 kubelet[2468]: I0707 06:08:09.318902 2468 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85139026-c5f1-40c8-960a-03a764baad77-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "85139026-c5f1-40c8-960a-03a764baad77" (UID: "85139026-c5f1-40c8-960a-03a764baad77"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:08:09.319872 systemd[1]: var-lib-kubelet-pods-85139026\x2dc5f1\x2d40c8\x2d960a\x2d03a764baad77-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 06:08:09.320650 kubelet[2468]: I0707 06:08:09.320606 2468 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85139026-c5f1-40c8-960a-03a764baad77-kube-api-access-k8czj" (OuterVolumeSpecName: "kube-api-access-k8czj") pod "85139026-c5f1-40c8-960a-03a764baad77" (UID: "85139026-c5f1-40c8-960a-03a764baad77"). InnerVolumeSpecName "kube-api-access-k8czj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:08:09.407190 kubelet[2468]: I0707 06:08:09.407105 2468 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85139026-c5f1-40c8-960a-03a764baad77-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 06:08:09.407190 kubelet[2468]: I0707 06:08:09.407136 2468 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k8czj\" (UniqueName: \"kubernetes.io/projected/85139026-c5f1-40c8-960a-03a764baad77-kube-api-access-k8czj\") on node \"localhost\" DevicePath \"\"" Jul 7 06:08:09.407190 kubelet[2468]: I0707 06:08:09.407144 2468 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85139026-c5f1-40c8-960a-03a764baad77-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 06:08:09.590703 systemd[1]: Removed slice kubepods-besteffort-pod85139026_c5f1_40c8_960a_03a764baad77.slice - libcontainer container kubepods-besteffort-pod85139026_c5f1_40c8_960a_03a764baad77.slice. Jul 7 06:08:09.601933 kubelet[2468]: I0707 06:08:09.601866 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7xpvr" podStartSLOduration=2.674603276 podStartE2EDuration="12.601851226s" podCreationTimestamp="2025-07-07 06:07:57 +0000 UTC" firstStartedPulling="2025-07-07 06:07:58.367065111 +0000 UTC m=+21.591677134" lastFinishedPulling="2025-07-07 06:08:08.294313061 +0000 UTC m=+31.518925084" observedRunningTime="2025-07-07 06:08:09.301456523 +0000 UTC m=+32.526068546" watchObservedRunningTime="2025-07-07 06:08:09.601851226 +0000 UTC m=+32.826463209" Jul 7 06:08:09.644038 systemd[1]: Created slice kubepods-besteffort-pod075e8f28_a10d_475e_b365_48642dcd3f3b.slice - libcontainer container kubepods-besteffort-pod075e8f28_a10d_475e_b365_48642dcd3f3b.slice. 
Jul 7 06:08:09.809275 kubelet[2468]: I0707 06:08:09.809232 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/075e8f28-a10d-475e-b365-48642dcd3f3b-whisker-ca-bundle\") pod \"whisker-84f4fd8778-b8fcx\" (UID: \"075e8f28-a10d-475e-b365-48642dcd3f3b\") " pod="calico-system/whisker-84f4fd8778-b8fcx" Jul 7 06:08:09.809275 kubelet[2468]: I0707 06:08:09.809284 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmldp\" (UniqueName: \"kubernetes.io/projected/075e8f28-a10d-475e-b365-48642dcd3f3b-kube-api-access-qmldp\") pod \"whisker-84f4fd8778-b8fcx\" (UID: \"075e8f28-a10d-475e-b365-48642dcd3f3b\") " pod="calico-system/whisker-84f4fd8778-b8fcx" Jul 7 06:08:09.809541 kubelet[2468]: I0707 06:08:09.809368 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/075e8f28-a10d-475e-b365-48642dcd3f3b-whisker-backend-key-pair\") pod \"whisker-84f4fd8778-b8fcx\" (UID: \"075e8f28-a10d-475e-b365-48642dcd3f3b\") " pod="calico-system/whisker-84f4fd8778-b8fcx" Jul 7 06:08:09.947534 containerd[1436]: time="2025-07-07T06:08:09.947481644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84f4fd8778-b8fcx,Uid:075e8f28-a10d-475e-b365-48642dcd3f3b,Namespace:calico-system,Attempt:0,}" Jul 7 06:08:10.054447 systemd-networkd[1373]: calie8fe63fc541: Link UP Jul 7 06:08:10.054659 systemd-networkd[1373]: calie8fe63fc541: Gained carrier Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:09.975 [INFO][3796] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:09.989 [INFO][3796] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84f4fd8778--b8fcx-eth0 whisker-84f4fd8778- calico-system 075e8f28-a10d-475e-b365-48642dcd3f3b 945 0 2025-07-07 06:08:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84f4fd8778 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84f4fd8778-b8fcx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie8fe63fc541 [] [] }} ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Namespace="calico-system" Pod="whisker-84f4fd8778-b8fcx" WorkloadEndpoint="localhost-k8s-whisker--84f4fd8778--b8fcx-" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:09.989 [INFO][3796] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Namespace="calico-system" Pod="whisker-84f4fd8778-b8fcx" WorkloadEndpoint="localhost-k8s-whisker--84f4fd8778--b8fcx-eth0" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.011 [INFO][3810] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" HandleID="k8s-pod-network.d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Workload="localhost-k8s-whisker--84f4fd8778--b8fcx-eth0" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.011 [INFO][3810] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" 
HandleID="k8s-pod-network.d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Workload="localhost-k8s-whisker--84f4fd8778--b8fcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84f4fd8778-b8fcx", "timestamp":"2025-07-07 06:08:10.011759353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.011 [INFO][3810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.012 [INFO][3810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.012 [INFO][3810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.022 [INFO][3810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" host="localhost" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.029 [INFO][3810] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.033 [INFO][3810] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.035 [INFO][3810] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.037 [INFO][3810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.037 [INFO][3810] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" host="localhost" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.038 [INFO][3810] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.041 [INFO][3810] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" host="localhost" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.046 [INFO][3810] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" host="localhost" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.046 [INFO][3810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" host="localhost" Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.046 [INFO][3810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:08:10.068657 containerd[1436]: 2025-07-07 06:08:10.046 [INFO][3810] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" HandleID="k8s-pod-network.d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Workload="localhost-k8s-whisker--84f4fd8778--b8fcx-eth0" Jul 7 06:08:10.069223 containerd[1436]: 2025-07-07 06:08:10.048 [INFO][3796] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Namespace="calico-system" Pod="whisker-84f4fd8778-b8fcx" WorkloadEndpoint="localhost-k8s-whisker--84f4fd8778--b8fcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84f4fd8778--b8fcx-eth0", GenerateName:"whisker-84f4fd8778-", Namespace:"calico-system", SelfLink:"", UID:"075e8f28-a10d-475e-b365-48642dcd3f3b", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 8, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84f4fd8778", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84f4fd8778-b8fcx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie8fe63fc541", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:10.069223 containerd[1436]: 2025-07-07 06:08:10.048 [INFO][3796] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Namespace="calico-system" Pod="whisker-84f4fd8778-b8fcx" WorkloadEndpoint="localhost-k8s-whisker--84f4fd8778--b8fcx-eth0" Jul 7 06:08:10.069223 containerd[1436]: 2025-07-07 06:08:10.048 [INFO][3796] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8fe63fc541 ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Namespace="calico-system" Pod="whisker-84f4fd8778-b8fcx" WorkloadEndpoint="localhost-k8s-whisker--84f4fd8778--b8fcx-eth0" Jul 7 06:08:10.069223 containerd[1436]: 2025-07-07 06:08:10.054 [INFO][3796] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Namespace="calico-system" Pod="whisker-84f4fd8778-b8fcx" WorkloadEndpoint="localhost-k8s-whisker--84f4fd8778--b8fcx-eth0" Jul 7 06:08:10.069223 containerd[1436]: 2025-07-07 06:08:10.055 [INFO][3796] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Namespace="calico-system" Pod="whisker-84f4fd8778-b8fcx" WorkloadEndpoint="localhost-k8s-whisker--84f4fd8778--b8fcx-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84f4fd8778--b8fcx-eth0", GenerateName:"whisker-84f4fd8778-", Namespace:"calico-system", SelfLink:"", UID:"075e8f28-a10d-475e-b365-48642dcd3f3b", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 8, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84f4fd8778", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf", Pod:"whisker-84f4fd8778-b8fcx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie8fe63fc541", MAC:"c2:c9:41:19:67:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:10.069223 containerd[1436]: 2025-07-07 06:08:10.066 [INFO][3796] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf" Namespace="calico-system" Pod="whisker-84f4fd8778-b8fcx" WorkloadEndpoint="localhost-k8s-whisker--84f4fd8778--b8fcx-eth0" Jul 7 06:08:10.082332 containerd[1436]: time="2025-07-07T06:08:10.082240590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:10.082332 containerd[1436]: time="2025-07-07T06:08:10.082298269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:10.082332 containerd[1436]: time="2025-07-07T06:08:10.082308989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:10.082783 containerd[1436]: time="2025-07-07T06:08:10.082380348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:10.110886 systemd[1]: Started cri-containerd-d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf.scope - libcontainer container d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf. 
Jul 7 06:08:10.120471 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:08:10.137578 containerd[1436]: time="2025-07-07T06:08:10.137513577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84f4fd8778-b8fcx,Uid:075e8f28-a10d-475e-b365-48642dcd3f3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf\"" Jul 7 06:08:10.139125 containerd[1436]: time="2025-07-07T06:08:10.139099317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:08:10.291679 kubelet[2468]: I0707 06:08:10.291445 2468 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:08:10.866401 kubelet[2468]: I0707 06:08:10.866350 2468 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85139026-c5f1-40c8-960a-03a764baad77" path="/var/lib/kubelet/pods/85139026-c5f1-40c8-960a-03a764baad77/volumes" Jul 7 06:08:10.983149 containerd[1436]: time="2025-07-07T06:08:10.983097096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:10.984863 containerd[1436]: time="2025-07-07T06:08:10.984830234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 7 06:08:10.994830 containerd[1436]: time="2025-07-07T06:08:10.994781069Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:10.997205 containerd[1436]: time="2025-07-07T06:08:10.997164719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:10.997948 containerd[1436]: time="2025-07-07T06:08:10.997871431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 858.674675ms" Jul 7 06:08:10.997948 containerd[1436]: time="2025-07-07T06:08:10.997899750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 7 06:08:11.002160 containerd[1436]: time="2025-07-07T06:08:11.002126258Z" level=info msg="CreateContainer within sandbox \"d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:08:11.011617 containerd[1436]: time="2025-07-07T06:08:11.011507868Z" level=info msg="CreateContainer within sandbox \"d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6121c16d97a41dd4092a85e201eeab065c7de5969d98bc82d6e035236e848b17\"" Jul 7 06:08:11.012712 containerd[1436]: time="2025-07-07T06:08:11.012532616Z" level=info msg="StartContainer for \"6121c16d97a41dd4092a85e201eeab065c7de5969d98bc82d6e035236e848b17\"" Jul 7 06:08:11.051829 systemd[1]: Started cri-containerd-6121c16d97a41dd4092a85e201eeab065c7de5969d98bc82d6e035236e848b17.scope - 
libcontainer container 6121c16d97a41dd4092a85e201eeab065c7de5969d98bc82d6e035236e848b17. Jul 7 06:08:11.079654 containerd[1436]: time="2025-07-07T06:08:11.079583948Z" level=info msg="StartContainer for \"6121c16d97a41dd4092a85e201eeab065c7de5969d98bc82d6e035236e848b17\" returns successfully" Jul 7 06:08:11.081444 containerd[1436]: time="2025-07-07T06:08:11.081415366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:08:11.205827 systemd-networkd[1373]: calie8fe63fc541: Gained IPv6LL Jul 7 06:08:12.329918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1381201424.mount: Deactivated successfully. Jul 7 06:08:12.344949 containerd[1436]: time="2025-07-07T06:08:12.343817410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 06:08:12.344949 containerd[1436]: time="2025-07-07T06:08:12.344497370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:12.346090 containerd[1436]: time="2025-07-07T06:08:12.346053930Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:12.347536 containerd[1436]: time="2025-07-07T06:08:12.347495771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.266049005s" Jul 7 06:08:12.347536 containerd[1436]: time="2025-07-07T06:08:12.347531131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 06:08:12.347936 containerd[1436]: time="2025-07-07T06:08:12.347905891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:12.352020 containerd[1436]: time="2025-07-07T06:08:12.351987372Z" level=info msg="CreateContainer within sandbox \"d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:08:12.371979 containerd[1436]: time="2025-07-07T06:08:12.371930778Z" level=info msg="CreateContainer within sandbox \"d30b702f250b2cb75176a907def84af4b319260f925187647c399ad3b8a4abcf\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8da910d33258fe0fc0394b3782c254f1bcd2d045769e0808d2532ae6f5a4e167\"" Jul 7 06:08:12.372549 containerd[1436]: time="2025-07-07T06:08:12.372476018Z" level=info msg="StartContainer for \"8da910d33258fe0fc0394b3782c254f1bcd2d045769e0808d2532ae6f5a4e167\"" Jul 7 06:08:12.403861 systemd[1]: Started cri-containerd-8da910d33258fe0fc0394b3782c254f1bcd2d045769e0808d2532ae6f5a4e167.scope - libcontainer container 8da910d33258fe0fc0394b3782c254f1bcd2d045769e0808d2532ae6f5a4e167. 
Jul 7 06:08:12.441914 containerd[1436]: time="2025-07-07T06:08:12.441865798Z" level=info msg="StartContainer for \"8da910d33258fe0fc0394b3782c254f1bcd2d045769e0808d2532ae6f5a4e167\" returns successfully" Jul 7 06:08:13.308199 kubelet[2468]: I0707 06:08:13.308122 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-84f4fd8778-b8fcx" podStartSLOduration=2.09795116 podStartE2EDuration="4.30806125s" podCreationTimestamp="2025-07-07 06:08:09 +0000 UTC" firstStartedPulling="2025-07-07 06:08:10.138790161 +0000 UTC m=+33.363402184" lastFinishedPulling="2025-07-07 06:08:12.348900251 +0000 UTC m=+35.573512274" observedRunningTime="2025-07-07 06:08:13.30754525 +0000 UTC m=+36.532157273" watchObservedRunningTime="2025-07-07 06:08:13.30806125 +0000 UTC m=+36.532673273" Jul 7 06:08:16.864450 containerd[1436]: time="2025-07-07T06:08:16.864395701Z" level=info msg="StopPodSandbox for \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\"" Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.912 [INFO][4219] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.913 [INFO][4219] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" iface="eth0" netns="/var/run/netns/cni-519c1c40-e10d-51ce-cd0e-7ab4a396f190" Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.913 [INFO][4219] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" iface="eth0" netns="/var/run/netns/cni-519c1c40-e10d-51ce-cd0e-7ab4a396f190" Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.913 [INFO][4219] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" iface="eth0" netns="/var/run/netns/cni-519c1c40-e10d-51ce-cd0e-7ab4a396f190" Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.913 [INFO][4219] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.913 [INFO][4219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.934 [INFO][4228] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" HandleID="k8s-pod-network.c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.935 [INFO][4228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.935 [INFO][4228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.943 [WARNING][4228] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" HandleID="k8s-pod-network.c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.943 [INFO][4228] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" HandleID="k8s-pod-network.c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.945 [INFO][4228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:16.948682 containerd[1436]: 2025-07-07 06:08:16.946 [INFO][4219] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:16.949962 containerd[1436]: time="2025-07-07T06:08:16.948763483Z" level=info msg="TearDown network for sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\" successfully" Jul 7 06:08:16.949962 containerd[1436]: time="2025-07-07T06:08:16.948791443Z" level=info msg="StopPodSandbox for \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\" returns successfully" Jul 7 06:08:16.951174 containerd[1436]: time="2025-07-07T06:08:16.950198644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695fbfc78b-c6gs2,Uid:086322d3-4360-4797-8169-4da48ff64f3a,Namespace:calico-system,Attempt:1,}" Jul 7 06:08:16.951957 systemd[1]: run-netns-cni\x2d519c1c40\x2de10d\x2d51ce\x2dcd0e\x2d7ab4a396f190.mount: Deactivated successfully. Jul 7 06:08:17.105198 systemd-networkd[1373]: cali25b919a6447: Link UP Jul 7 06:08:17.105408 systemd-networkd[1373]: cali25b919a6447: Gained carrier Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.016 [INFO][4237] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.030 [INFO][4237] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0 calico-kube-controllers-695fbfc78b- calico-system 086322d3-4360-4797-8169-4da48ff64f3a 980 0 2025-07-07 06:07:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:695fbfc78b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-695fbfc78b-c6gs2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali25b919a6447 [] [] }} ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Namespace="calico-system" Pod="calico-kube-controllers-695fbfc78b-c6gs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.030 [INFO][4237] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Namespace="calico-system" Pod="calico-kube-controllers-695fbfc78b-c6gs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.051 
[INFO][4251] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" HandleID="k8s-pod-network.f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.051 [INFO][4251] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" HandleID="k8s-pod-network.f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000591de0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-695fbfc78b-c6gs2", "timestamp":"2025-07-07 06:08:17.05182479 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.052 [INFO][4251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.052 [INFO][4251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.052 [INFO][4251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.072 [INFO][4251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" host="localhost" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.077 [INFO][4251] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.083 [INFO][4251] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.085 [INFO][4251] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.087 [INFO][4251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.087 [INFO][4251] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" host="localhost" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.089 [INFO][4251] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5 Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.095 [INFO][4251] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" host="localhost" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.101 [INFO][4251] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" host="localhost" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.101 
[INFO][4251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" host="localhost" Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.101 [INFO][4251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:17.126976 containerd[1436]: 2025-07-07 06:08:17.101 [INFO][4251] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" HandleID="k8s-pod-network.f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:17.127591 containerd[1436]: 2025-07-07 06:08:17.103 [INFO][4237] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Namespace="calico-system" Pod="calico-kube-controllers-695fbfc78b-c6gs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0", GenerateName:"calico-kube-controllers-695fbfc78b-", Namespace:"calico-system", SelfLink:"", UID:"086322d3-4360-4797-8169-4da48ff64f3a", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695fbfc78b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-695fbfc78b-c6gs2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25b919a6447", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:17.127591 containerd[1436]: 2025-07-07 06:08:17.103 [INFO][4237] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Namespace="calico-system" Pod="calico-kube-controllers-695fbfc78b-c6gs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:17.127591 containerd[1436]: 2025-07-07 06:08:17.103 [INFO][4237] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25b919a6447 ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Namespace="calico-system" Pod="calico-kube-controllers-695fbfc78b-c6gs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:17.127591 containerd[1436]: 2025-07-07 06:08:17.106 [INFO][4237] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Namespace="calico-system" Pod="calico-kube-controllers-695fbfc78b-c6gs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:17.127591 containerd[1436]: 2025-07-07 06:08:17.106 [INFO][4237] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Namespace="calico-system" Pod="calico-kube-controllers-695fbfc78b-c6gs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0", GenerateName:"calico-kube-controllers-695fbfc78b-", Namespace:"calico-system", SelfLink:"", UID:"086322d3-4360-4797-8169-4da48ff64f3a", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695fbfc78b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5", Pod:"calico-kube-controllers-695fbfc78b-c6gs2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25b919a6447", MAC:"7a:b1:b9:97:3e:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:17.127591 containerd[1436]: 2025-07-07 06:08:17.124 [INFO][4237] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5" Namespace="calico-system" Pod="calico-kube-controllers-695fbfc78b-c6gs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:17.147916 containerd[1436]: time="2025-07-07T06:08:17.147472174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:17.147916 containerd[1436]: time="2025-07-07T06:08:17.147869174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:17.147916 containerd[1436]: time="2025-07-07T06:08:17.147882294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:17.148106 containerd[1436]: time="2025-07-07T06:08:17.147962374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:17.178851 systemd[1]: Started cri-containerd-f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5.scope - libcontainer container f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5. Jul 7 06:08:17.195008 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:08:17.211412 containerd[1436]: time="2025-07-07T06:08:17.211358631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695fbfc78b-c6gs2,Uid:086322d3-4360-4797-8169-4da48ff64f3a,Namespace:calico-system,Attempt:1,} returns sandbox id \"f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5\"" Jul 7 06:08:17.213586 containerd[1436]: time="2025-07-07T06:08:17.213552191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:08:17.865105 containerd[1436]: time="2025-07-07T06:08:17.865049837Z" level=info msg="StopPodSandbox for \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\"" Jul 7 06:08:17.865796 containerd[1436]: time="2025-07-07T06:08:17.865542517Z" level=info msg="StopPodSandbox for \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\"" Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.943 [INFO][4351] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.944 [INFO][4351] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" iface="eth0" netns="/var/run/netns/cni-705f814c-be3e-56fa-eb2f-5b668a0cf846" Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.944 [INFO][4351] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" iface="eth0" netns="/var/run/netns/cni-705f814c-be3e-56fa-eb2f-5b668a0cf846" Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.944 [INFO][4351] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" iface="eth0" netns="/var/run/netns/cni-705f814c-be3e-56fa-eb2f-5b668a0cf846" Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.944 [INFO][4351] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.944 [INFO][4351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.971 [INFO][4372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" HandleID="k8s-pod-network.116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.971 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.971 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.985 [WARNING][4372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" HandleID="k8s-pod-network.116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.985 [INFO][4372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" HandleID="k8s-pod-network.116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.988 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:17.991412 containerd[1436]: 2025-07-07 06:08:17.989 [INFO][4351] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:17.992070 containerd[1436]: time="2025-07-07T06:08:17.991929470Z" level=info msg="TearDown network for sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\" successfully" Jul 7 06:08:17.992070 containerd[1436]: time="2025-07-07T06:08:17.991963790Z" level=info msg="StopPodSandbox for \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\" returns successfully" Jul 7 06:08:17.992849 containerd[1436]: time="2025-07-07T06:08:17.992818110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cfb5m,Uid:9eb3733b-fa70-428d-847e-d3206b3573f6,Namespace:calico-system,Attempt:1,}" Jul 7 06:08:17.993639 systemd[1]: run-netns-cni\x2d705f814c\x2dbe3e\x2d56fa\x2deb2f\x2d5b668a0cf846.mount: Deactivated successfully. Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:17.956 [INFO][4352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:17.956 [INFO][4352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" iface="eth0" netns="/var/run/netns/cni-01674f2b-21d2-348e-6a4b-c3a7ab85e220" Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:17.957 [INFO][4352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" iface="eth0" netns="/var/run/netns/cni-01674f2b-21d2-348e-6a4b-c3a7ab85e220" Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:17.957 [INFO][4352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" iface="eth0" netns="/var/run/netns/cni-01674f2b-21d2-348e-6a4b-c3a7ab85e220" Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:17.957 [INFO][4352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:17.957 [INFO][4352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:18.010 [INFO][4378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" HandleID="k8s-pod-network.90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:18.010 [INFO][4378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:18.010 [INFO][4378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:18.033 [WARNING][4378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" HandleID="k8s-pod-network.90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:18.033 [INFO][4378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" HandleID="k8s-pod-network.90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:18.037 [INFO][4378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:18.042015 containerd[1436]: 2025-07-07 06:08:18.039 [INFO][4352] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:18.042015 containerd[1436]: time="2025-07-07T06:08:18.041990442Z" level=info msg="TearDown network for sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\" successfully" Jul 7 06:08:18.042547 containerd[1436]: time="2025-07-07T06:08:18.042020282Z" level=info msg="StopPodSandbox for \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\" returns successfully" Jul 7 06:08:18.044639 containerd[1436]: time="2025-07-07T06:08:18.043499723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwndf,Uid:b4408009-2167-498a-9cbf-5110d3e01355,Namespace:calico-system,Attempt:1,}" Jul 7 06:08:18.045520 systemd[1]: run-netns-cni\x2d01674f2b\x2d21d2\x2d348e\x2d6a4b\x2dc3a7ab85e220.mount: Deactivated successfully. 
Jul 7 06:08:18.216838 systemd-networkd[1373]: cali1df61e19253: Link UP Jul 7 06:08:18.217763 systemd-networkd[1373]: cali1df61e19253: Gained carrier Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.084 [INFO][4399] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.097 [INFO][4399] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fwndf-eth0 csi-node-driver- calico-system b4408009-2167-498a-9cbf-5110d3e01355 1000 0 2025-07-07 06:07:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fwndf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1df61e19253 [] [] }} ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Namespace="calico-system" Pod="csi-node-driver-fwndf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fwndf-" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.098 [INFO][4399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Namespace="calico-system" Pod="csi-node-driver-fwndf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.129 [INFO][4416] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" HandleID="k8s-pod-network.a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.131 [INFO][4416] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" HandleID="k8s-pod-network.a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fwndf", "timestamp":"2025-07-07 06:08:18.129824464 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.131 [INFO][4416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.131 [INFO][4416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.131 [INFO][4416] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.148 [INFO][4416] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" host="localhost" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.174 [INFO][4416] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.186 [INFO][4416] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.188 [INFO][4416] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.191 [INFO][4416] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.191 [INFO][4416] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" host="localhost" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.195 [INFO][4416] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.198 [INFO][4416] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" host="localhost" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.204 [INFO][4416] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" host="localhost" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.204 [INFO][4416] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" host="localhost" Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.204 [INFO][4416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:08:18.228278 containerd[1436]: 2025-07-07 06:08:18.204 [INFO][4416] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" HandleID="k8s-pod-network.a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:18.228955 containerd[1436]: 2025-07-07 06:08:18.207 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Namespace="calico-system" Pod="csi-node-driver-fwndf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fwndf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fwndf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4408009-2167-498a-9cbf-5110d3e01355", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fwndf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1df61e19253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:18.228955 containerd[1436]: 2025-07-07 06:08:18.207 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Namespace="calico-system" Pod="csi-node-driver-fwndf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:18.228955 containerd[1436]: 2025-07-07 06:08:18.207 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1df61e19253 ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Namespace="calico-system" Pod="csi-node-driver-fwndf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:18.228955 containerd[1436]: 2025-07-07 06:08:18.211 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Namespace="calico-system" Pod="csi-node-driver-fwndf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:18.228955 containerd[1436]: 2025-07-07 06:08:18.211 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Namespace="calico-system" Pod="csi-node-driver-fwndf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fwndf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fwndf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4408009-2167-498a-9cbf-5110d3e01355", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d", Pod:"csi-node-driver-fwndf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1df61e19253", MAC:"9e:ec:a4:d0:e0:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:18.228955 containerd[1436]: 2025-07-07 06:08:18.225 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d" Namespace="calico-system" Pod="csi-node-driver-fwndf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:18.281164 containerd[1436]: time="2025-07-07T06:08:18.281069302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:18.281164 containerd[1436]: time="2025-07-07T06:08:18.281119582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:18.281164 containerd[1436]: time="2025-07-07T06:08:18.281130382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:18.281559 containerd[1436]: time="2025-07-07T06:08:18.281202182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:18.304933 systemd[1]: Started cri-containerd-a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d.scope - libcontainer container a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d. 
Jul 7 06:08:18.313073 systemd-networkd[1373]: cali5a293864f21: Link UP Jul 7 06:08:18.313223 systemd-networkd[1373]: cali5a293864f21: Gained carrier Jul 7 06:08:18.323872 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.077 [INFO][4388] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.098 [INFO][4388] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0 goldmane-768f4c5c69- calico-system 9eb3733b-fa70-428d-847e-d3206b3573f6 999 0 2025-07-07 06:07:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-cfb5m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5a293864f21 [] [] }} ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Namespace="calico-system" Pod="goldmane-768f4c5c69-cfb5m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cfb5m-" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.098 [INFO][4388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Namespace="calico-system" Pod="goldmane-768f4c5c69-cfb5m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.168 [INFO][4422] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" HandleID="k8s-pod-network.c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.168 [INFO][4422] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" HandleID="k8s-pod-network.c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-cfb5m", "timestamp":"2025-07-07 06:08:18.167999273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.168 [INFO][4422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.206 [INFO][4422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.207 [INFO][4422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.243 [INFO][4422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" host="localhost" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.259 [INFO][4422] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.285 [INFO][4422] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.288 [INFO][4422] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.291 [INFO][4422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.291 [INFO][4422] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" host="localhost" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.294 [INFO][4422] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.299 [INFO][4422] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" host="localhost" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.305 [INFO][4422] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" host="localhost" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.306 [INFO][4422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" host="localhost" Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.306 [INFO][4422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:08:18.334168 containerd[1436]: 2025-07-07 06:08:18.306 [INFO][4422] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" HandleID="k8s-pod-network.c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0"
Jul 7 06:08:18.334761 containerd[1436]: 2025-07-07 06:08:18.308 [INFO][4388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Namespace="calico-system" Pod="goldmane-768f4c5c69-cfb5m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"9eb3733b-fa70-428d-847e-d3206b3573f6", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-cfb5m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a293864f21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:08:18.334761 containerd[1436]: 2025-07-07 06:08:18.309 [INFO][4388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Namespace="calico-system" Pod="goldmane-768f4c5c69-cfb5m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0"
Jul 7 06:08:18.334761 containerd[1436]: 2025-07-07 06:08:18.309 [INFO][4388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a293864f21 ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Namespace="calico-system" Pod="goldmane-768f4c5c69-cfb5m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0"
Jul 7 06:08:18.334761 containerd[1436]: 2025-07-07 06:08:18.314 [INFO][4388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Namespace="calico-system" Pod="goldmane-768f4c5c69-cfb5m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0"
Jul 7 06:08:18.334761 containerd[1436]: 2025-07-07 06:08:18.315 [INFO][4388] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Namespace="calico-system" Pod="goldmane-768f4c5c69-cfb5m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"9eb3733b-fa70-428d-847e-d3206b3573f6", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d", Pod:"goldmane-768f4c5c69-cfb5m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a293864f21", MAC:"c2:e1:a6:20:af:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:08:18.334761 containerd[1436]: 2025-07-07 06:08:18.328 [INFO][4388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d" Namespace="calico-system" Pod="goldmane-768f4c5c69-cfb5m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0"
Jul 7 06:08:18.352608 containerd[1436]: time="2025-07-07T06:08:18.352553679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwndf,Uid:b4408009-2167-498a-9cbf-5110d3e01355,Namespace:calico-system,Attempt:1,} returns sandbox id \"a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d\""
Jul 7 06:08:18.359787 containerd[1436]: time="2025-07-07T06:08:18.359684321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:08:18.359787 containerd[1436]: time="2025-07-07T06:08:18.359750641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:08:18.359998 containerd[1436]: time="2025-07-07T06:08:18.359774561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:08:18.359998 containerd[1436]: time="2025-07-07T06:08:18.359867481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:08:18.377824 systemd[1]: Started cri-containerd-c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d.scope - libcontainer container c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d.
Jul 7 06:08:18.392344 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:08:18.419864 containerd[1436]: time="2025-07-07T06:08:18.419813936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cfb5m,Uid:9eb3733b-fa70-428d-847e-d3206b3573f6,Namespace:calico-system,Attempt:1,} returns sandbox id \"c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d\""
Jul 7 06:08:18.442861 systemd[1]: Started sshd@7-10.0.0.102:22-10.0.0.1:33010.service - OpenSSH per-connection server daemon (10.0.0.1:33010).
Jul 7 06:08:18.497458 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 33010 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:08:18.497640 sshd[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:18.502585 systemd-logind[1424]: New session 8 of user core.
Jul 7 06:08:18.511845 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 06:08:18.808733 sshd[4539]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:18.812242 systemd[1]: sshd@7-10.0.0.102:22-10.0.0.1:33010.service: Deactivated successfully.
Jul 7 06:08:18.815643 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 06:08:18.817155 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit.
Jul 7 06:08:18.818714 systemd-logind[1424]: Removed session 8.
Jul 7 06:08:18.865426 containerd[1436]: time="2025-07-07T06:08:18.865142206Z" level=info msg="StopPodSandbox for \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\""
Jul 7 06:08:18.886810 systemd-networkd[1373]: cali25b919a6447: Gained IPv6LL
Jul 7 06:08:19.300524 containerd[1436]: time="2025-07-07T06:08:19.300456152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:08:19.302754 containerd[1436]: time="2025-07-07T06:08:19.302708633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336"
Jul 7 06:08:19.306460 containerd[1436]: time="2025-07-07T06:08:19.306165354Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:08:19.312223 containerd[1436]: time="2025-07-07T06:08:19.312158795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:08:19.313023 containerd[1436]: time="2025-07-07T06:08:19.312971355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.099221724s"
Jul 7 06:08:19.313023 containerd[1436]: time="2025-07-07T06:08:19.313012755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""
Jul 7 06:08:19.317224 containerd[1436]: time="2025-07-07T06:08:19.316773956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 7 06:08:19.329001 containerd[1436]: time="2025-07-07T06:08:19.328817119Z" level=info msg="CreateContainer within sandbox \"f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 7 06:08:19.348744 containerd[1436]: time="2025-07-07T06:08:19.348660284Z" level=info msg="CreateContainer within sandbox \"f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5fb90454cf224725f5a1f26d4c49a54a52edc47862d3e06258a87f6989be9c94\""
Jul 7 06:08:19.349300 containerd[1436]: time="2025-07-07T06:08:19.349262804Z" level=info msg="StartContainer for \"5fb90454cf224725f5a1f26d4c49a54a52edc47862d3e06258a87f6989be9c94\""
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.304 [INFO][4573] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4"
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.305 [INFO][4573] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" iface="eth0" netns="/var/run/netns/cni-9d531292-e474-5e69-ba12-13a84b85b3ea"
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.305 [INFO][4573] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" iface="eth0" netns="/var/run/netns/cni-9d531292-e474-5e69-ba12-13a84b85b3ea"
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.307 [INFO][4573] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" iface="eth0" netns="/var/run/netns/cni-9d531292-e474-5e69-ba12-13a84b85b3ea"
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.307 [INFO][4573] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4"
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.307 [INFO][4573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4"
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.338 [INFO][4600] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" HandleID="k8s-pod-network.3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0"
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.339 [INFO][4600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.339 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.350 [WARNING][4600] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" HandleID="k8s-pod-network.3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0"
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.350 [INFO][4600] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" HandleID="k8s-pod-network.3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0"
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.352 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:19.357791 containerd[1436]: 2025-07-07 06:08:19.354 [INFO][4573] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4"
Jul 7 06:08:19.358804 containerd[1436]: time="2025-07-07T06:08:19.358325166Z" level=info msg="TearDown network for sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\" successfully"
Jul 7 06:08:19.358804 containerd[1436]: time="2025-07-07T06:08:19.358354286Z" level=info msg="StopPodSandbox for \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\" returns successfully"
Jul 7 06:08:19.358854 kubelet[2468]: E0707 06:08:19.358752 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:08:19.360116 containerd[1436]: time="2025-07-07T06:08:19.359377767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zp2bz,Uid:27409c61-b572-4164-bf93-9ca33f0fe80a,Namespace:kube-system,Attempt:1,}"
Jul 7 06:08:19.360799 systemd[1]: run-netns-cni\x2d9d531292\x2de474\x2d5e69\x2dba12\x2d13a84b85b3ea.mount: Deactivated successfully.
Jul 7 06:08:19.379847 systemd[1]: Started cri-containerd-5fb90454cf224725f5a1f26d4c49a54a52edc47862d3e06258a87f6989be9c94.scope - libcontainer container 5fb90454cf224725f5a1f26d4c49a54a52edc47862d3e06258a87f6989be9c94.
Jul 7 06:08:19.415341 containerd[1436]: time="2025-07-07T06:08:19.415288140Z" level=info msg="StartContainer for \"5fb90454cf224725f5a1f26d4c49a54a52edc47862d3e06258a87f6989be9c94\" returns successfully"
Jul 7 06:08:19.461788 systemd-networkd[1373]: cali5a293864f21: Gained IPv6LL
Jul 7 06:08:19.481074 systemd-networkd[1373]: cali96910a68687: Link UP
Jul 7 06:08:19.482044 systemd-networkd[1373]: cali96910a68687: Gained carrier
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.390 [INFO][4629] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.408 [INFO][4629] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0 coredns-674b8bbfcf- kube-system 27409c61-b572-4164-bf93-9ca33f0fe80a 1039 0 2025-07-07 06:07:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zp2bz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96910a68687 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-zp2bz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zp2bz-"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.408 [INFO][4629] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-zp2bz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.431 [INFO][4656] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" HandleID="k8s-pod-network.ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.431 [INFO][4656] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" HandleID="k8s-pod-network.ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d810), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zp2bz", "timestamp":"2025-07-07 06:08:19.431249904 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.431 [INFO][4656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.431 [INFO][4656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.431 [INFO][4656] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.442 [INFO][4656] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" host="localhost"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.448 [INFO][4656] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.453 [INFO][4656] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.456 [INFO][4656] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.459 [INFO][4656] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.459 [INFO][4656] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" host="localhost"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.461 [INFO][4656] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.466 [INFO][4656] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" host="localhost"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.473 [INFO][4656] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" host="localhost"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.474 [INFO][4656] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" host="localhost"
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.474 [INFO][4656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:19.502329 containerd[1436]: 2025-07-07 06:08:19.474 [INFO][4656] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" HandleID="k8s-pod-network.ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0"
Jul 7 06:08:19.503049 containerd[1436]: 2025-07-07 06:08:19.476 [INFO][4629] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-zp2bz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"27409c61-b572-4164-bf93-9ca33f0fe80a", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zp2bz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96910a68687", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:08:19.503049 containerd[1436]: 2025-07-07 06:08:19.476 [INFO][4629] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-zp2bz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0"
Jul 7 06:08:19.503049 containerd[1436]: 2025-07-07 06:08:19.476 [INFO][4629] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96910a68687 ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-zp2bz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0"
Jul 7 06:08:19.503049 containerd[1436]: 2025-07-07 06:08:19.480 [INFO][4629] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-zp2bz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0"
Jul 7 06:08:19.503049 containerd[1436]: 2025-07-07 06:08:19.482 [INFO][4629] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-zp2bz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"27409c61-b572-4164-bf93-9ca33f0fe80a", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00", Pod:"coredns-674b8bbfcf-zp2bz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96910a68687", MAC:"42:f0:c4:7f:c7:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:08:19.503049 containerd[1436]: 2025-07-07 06:08:19.499 [INFO][4629] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00" Namespace="kube-system" Pod="coredns-674b8bbfcf-zp2bz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0"
Jul 7 06:08:19.522011 containerd[1436]: time="2025-07-07T06:08:19.521891926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:08:19.522011 containerd[1436]: time="2025-07-07T06:08:19.521950406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:08:19.522011 containerd[1436]: time="2025-07-07T06:08:19.521965926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:08:19.522205 containerd[1436]: time="2025-07-07T06:08:19.522049246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:08:19.549940 systemd[1]: Started cri-containerd-ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00.scope - libcontainer container ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00.
Jul 7 06:08:19.563291 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:08:19.595810 containerd[1436]: time="2025-07-07T06:08:19.595759064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zp2bz,Uid:27409c61-b572-4164-bf93-9ca33f0fe80a,Namespace:kube-system,Attempt:1,} returns sandbox id \"ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00\""
Jul 7 06:08:19.597356 kubelet[2468]: E0707 06:08:19.597099 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:08:19.600935 containerd[1436]: time="2025-07-07T06:08:19.600898945Z" level=info msg="CreateContainer within sandbox \"ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 06:08:19.613571 containerd[1436]: time="2025-07-07T06:08:19.613521228Z" level=info msg="CreateContainer within sandbox \"ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a64359cdfb2e634750b3b0ce937e7ba7a4c74e45f1bf099a8b22e0a6194ad4f\""
Jul 7 06:08:19.614294 containerd[1436]: time="2025-07-07T06:08:19.614003068Z" level=info msg="StartContainer for \"2a64359cdfb2e634750b3b0ce937e7ba7a4c74e45f1bf099a8b22e0a6194ad4f\""
Jul 7 06:08:19.641859 systemd[1]: Started cri-containerd-2a64359cdfb2e634750b3b0ce937e7ba7a4c74e45f1bf099a8b22e0a6194ad4f.scope - libcontainer container 2a64359cdfb2e634750b3b0ce937e7ba7a4c74e45f1bf099a8b22e0a6194ad4f.
Jul 7 06:08:19.667634 containerd[1436]: time="2025-07-07T06:08:19.667590681Z" level=info msg="StartContainer for \"2a64359cdfb2e634750b3b0ce937e7ba7a4c74e45f1bf099a8b22e0a6194ad4f\" returns successfully"
Jul 7 06:08:19.863800 containerd[1436]: time="2025-07-07T06:08:19.863512928Z" level=info msg="StopPodSandbox for \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\""
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.910 [INFO][4779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.911 [INFO][4779] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" iface="eth0" netns="/var/run/netns/cni-44407650-3129-b996-7dac-a84ff8163e72"
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.911 [INFO][4779] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" iface="eth0" netns="/var/run/netns/cni-44407650-3129-b996-7dac-a84ff8163e72"
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.911 [INFO][4779] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" iface="eth0" netns="/var/run/netns/cni-44407650-3129-b996-7dac-a84ff8163e72"
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.912 [INFO][4779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.912 [INFO][4779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.935 [INFO][4787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" HandleID="k8s-pod-network.fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.935 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.935 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.944 [WARNING][4787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" HandleID="k8s-pod-network.fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.944 [INFO][4787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" HandleID="k8s-pod-network.fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.945 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:19.949568 containerd[1436]: 2025-07-07 06:08:19.947 [INFO][4779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:19.950658 containerd[1436]: time="2025-07-07T06:08:19.949704949Z" level=info msg="TearDown network for sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\" successfully"
Jul 7 06:08:19.950658 containerd[1436]: time="2025-07-07T06:08:19.949732549Z" level=info msg="StopPodSandbox for \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\" returns successfully"
Jul 7 06:08:19.950658 containerd[1436]: time="2025-07-07T06:08:19.950396029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674899bd6d-llcx9,Uid:689618a0-74c2-4971-a557-2ab4b585a588,Namespace:calico-apiserver,Attempt:1,}"
Jul 7 06:08:20.000383 systemd[1]: run-netns-cni\x2d44407650\x2d3129\x2db996\x2d7dac\x2da84ff8163e72.mount: Deactivated successfully.
Jul 7 06:08:20.076424 systemd-networkd[1373]: cali923f4c663b7: Link UP
Jul 7 06:08:20.077031 systemd-networkd[1373]: cali923f4c663b7: Gained carrier
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:19.997 [INFO][4794] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.012 [INFO][4794] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0 calico-apiserver-674899bd6d- calico-apiserver 689618a0-74c2-4971-a557-2ab4b585a588 1060 0 2025-07-07 06:07:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:674899bd6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-674899bd6d-llcx9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali923f4c663b7 [] [] }} ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-llcx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--llcx9-"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.012 [INFO][4794] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-llcx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.036 [INFO][4809] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" HandleID="k8s-pod-network.4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.036 [INFO][4809] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" HandleID="k8s-pod-network.4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-674899bd6d-llcx9", "timestamp":"2025-07-07 06:08:20.03628473 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.036 [INFO][4809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.036 [INFO][4809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.036 [INFO][4809] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.046 [INFO][4809] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" host="localhost"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.051 [INFO][4809] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.057 [INFO][4809] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.059 [INFO][4809] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.062 [INFO][4809] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.062 [INFO][4809] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" host="localhost"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.063 [INFO][4809] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.067 [INFO][4809] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" host="localhost"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.072 [INFO][4809] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" host="localhost"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.072 [INFO][4809] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" host="localhost"
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.072 [INFO][4809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:20.089090 containerd[1436]: 2025-07-07 06:08:20.072 [INFO][4809] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" HandleID="k8s-pod-network.4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:20.089801 containerd[1436]: 2025-07-07 06:08:20.074 [INFO][4794] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-llcx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0", GenerateName:"calico-apiserver-674899bd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"689618a0-74c2-4971-a557-2ab4b585a588", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674899bd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-674899bd6d-llcx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali923f4c663b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:08:20.089801 containerd[1436]: 2025-07-07 06:08:20.074 [INFO][4794] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-llcx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:20.089801 containerd[1436]: 2025-07-07 06:08:20.074 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali923f4c663b7 ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-llcx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:20.089801 containerd[1436]: 2025-07-07 06:08:20.077 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-llcx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:20.089801 containerd[1436]: 2025-07-07 06:08:20.078 [INFO][4794] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-llcx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0", GenerateName:"calico-apiserver-674899bd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"689618a0-74c2-4971-a557-2ab4b585a588", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674899bd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e", Pod:"calico-apiserver-674899bd6d-llcx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali923f4c663b7", MAC:"da:08:26:9e:4b:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:08:20.089801 containerd[1436]: 2025-07-07 06:08:20.087 [INFO][4794] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-llcx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:20.104207 containerd[1436]: time="2025-07-07T06:08:20.104101986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:08:20.104207 containerd[1436]: time="2025-07-07T06:08:20.104169266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:08:20.104207 containerd[1436]: time="2025-07-07T06:08:20.104186386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:08:20.104509 containerd[1436]: time="2025-07-07T06:08:20.104268946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:08:20.133924 systemd[1]: Started cri-containerd-4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e.scope - libcontainer container 4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e.
Jul 7 06:08:20.157301 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:08:20.177509 containerd[1436]: time="2025-07-07T06:08:20.177419963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674899bd6d-llcx9,Uid:689618a0-74c2-4971-a557-2ab4b585a588,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e\""
Jul 7 06:08:20.229961 systemd-networkd[1373]: cali1df61e19253: Gained IPv6LL
Jul 7 06:08:20.263008 containerd[1436]: time="2025-07-07T06:08:20.262875143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:08:20.263760 containerd[1436]: time="2025-07-07T06:08:20.263640663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Jul 7 06:08:20.271119 containerd[1436]: time="2025-07-07T06:08:20.271044425Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:08:20.275418 containerd[1436]: time="2025-07-07T06:08:20.275355146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:08:20.276190 containerd[1436]: time="2025-07-07T06:08:20.276154586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 959.33379ms"
Jul 7 06:08:20.276258 containerd[1436]: time="2025-07-07T06:08:20.276192346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Jul 7 06:08:20.277113 containerd[1436]: time="2025-07-07T06:08:20.277071066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 7 06:08:20.281034 containerd[1436]: time="2025-07-07T06:08:20.280909547Z" level=info msg="CreateContainer within sandbox \"a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 7 06:08:20.298689 containerd[1436]: time="2025-07-07T06:08:20.298632271Z" level=info msg="CreateContainer within sandbox \"a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ff81df6d0a5e6195baf79be9e88c8c014eae487376d2d0dec690e7bf2eae374c\""
Jul 7 06:08:20.299278 containerd[1436]: time="2025-07-07T06:08:20.299238992Z" level=info msg="StartContainer for \"ff81df6d0a5e6195baf79be9e88c8c014eae487376d2d0dec690e7bf2eae374c\""
Jul 7 06:08:20.325880 systemd[1]: Started cri-containerd-ff81df6d0a5e6195baf79be9e88c8c014eae487376d2d0dec690e7bf2eae374c.scope - libcontainer container ff81df6d0a5e6195baf79be9e88c8c014eae487376d2d0dec690e7bf2eae374c.
Jul 7 06:08:20.337317 kubelet[2468]: E0707 06:08:20.335834 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:08:20.377153 kubelet[2468]: I0707 06:08:20.376763 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zp2bz" podStartSLOduration=35.37671541 podStartE2EDuration="35.37671541s" podCreationTimestamp="2025-07-07 06:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:20.350730124 +0000 UTC m=+43.575342147" watchObservedRunningTime="2025-07-07 06:08:20.37671541 +0000 UTC m=+43.601327393"
Jul 7 06:08:20.396369 kubelet[2468]: I0707 06:08:20.396027 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-695fbfc78b-c6gs2" podStartSLOduration=20.293603129 podStartE2EDuration="22.396001574s" podCreationTimestamp="2025-07-07 06:07:58 +0000 UTC" firstStartedPulling="2025-07-07 06:08:17.213060831 +0000 UTC m=+40.437672854" lastFinishedPulling="2025-07-07 06:08:19.315459276 +0000 UTC m=+42.540071299" observedRunningTime="2025-07-07 06:08:20.395719454 +0000 UTC m=+43.620331477" watchObservedRunningTime="2025-07-07 06:08:20.396001574 +0000 UTC m=+43.620613597"
Jul 7 06:08:20.425424 containerd[1436]: time="2025-07-07T06:08:20.425303341Z" level=info msg="StartContainer for \"ff81df6d0a5e6195baf79be9e88c8c014eae487376d2d0dec690e7bf2eae374c\" returns successfully"
Jul 7 06:08:20.759582 kubelet[2468]: I0707 06:08:20.759531 2468 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:08:20.759920 kubelet[2468]: E0707 06:08:20.759893 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:08:20.866102 containerd[1436]: time="2025-07-07T06:08:20.865818965Z" level=info msg="StopPodSandbox for \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\""
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:20.983 [INFO][4945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406"
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:20.983 [INFO][4945] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" iface="eth0" netns="/var/run/netns/cni-facb46c0-bb49-dc98-019a-92b7c7175c4e"
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:20.983 [INFO][4945] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" iface="eth0" netns="/var/run/netns/cni-facb46c0-bb49-dc98-019a-92b7c7175c4e"
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:20.983 [INFO][4945] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" iface="eth0" netns="/var/run/netns/cni-facb46c0-bb49-dc98-019a-92b7c7175c4e"
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:20.984 [INFO][4945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406"
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:20.984 [INFO][4945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406"
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:21.006 [INFO][4954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" HandleID="k8s-pod-network.ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0"
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:21.007 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:21.007 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:21.015 [WARNING][4954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" HandleID="k8s-pod-network.ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0"
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:21.015 [INFO][4954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" HandleID="k8s-pod-network.ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0"
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:21.017 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:21.020896 containerd[1436]: 2025-07-07 06:08:21.018 [INFO][4945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406"
Jul 7 06:08:21.021638 containerd[1436]: time="2025-07-07T06:08:21.021592361Z" level=info msg="TearDown network for sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\" successfully"
Jul 7 06:08:21.021638 containerd[1436]: time="2025-07-07T06:08:21.021633721Z" level=info msg="StopPodSandbox for \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\" returns successfully"
Jul 7 06:08:21.023602 systemd[1]: run-netns-cni\x2dfacb46c0\x2dbb49\x2ddc98\x2d019a\x2d92b7c7175c4e.mount: Deactivated successfully.
Jul 7 06:08:21.024011 kubelet[2468]: E0707 06:08:21.023799 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:08:21.024366 containerd[1436]: time="2025-07-07T06:08:21.024281922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7g75c,Uid:7f20084f-0209-4ec2-bfca-cd55b8ec8924,Namespace:kube-system,Attempt:1,}"
Jul 7 06:08:21.170851 systemd-networkd[1373]: calidf3ed48a1c8: Link UP
Jul 7 06:08:21.172599 systemd-networkd[1373]: calidf3ed48a1c8: Gained carrier
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.084 [INFO][4967] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.097 [INFO][4967] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--7g75c-eth0 coredns-674b8bbfcf- kube-system 7f20084f-0209-4ec2-bfca-cd55b8ec8924 1092 0 2025-07-07 06:07:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-7g75c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidf3ed48a1c8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Namespace="kube-system" Pod="coredns-674b8bbfcf-7g75c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7g75c-"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.097 [INFO][4967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Namespace="kube-system" Pod="coredns-674b8bbfcf-7g75c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.122 [INFO][4977] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" HandleID="k8s-pod-network.b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.122 [INFO][4977] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" HandleID="k8s-pod-network.b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137420), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-7g75c", "timestamp":"2025-07-07 06:08:21.122422384 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.122 [INFO][4977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.122 [INFO][4977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.122 [INFO][4977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.135 [INFO][4977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" host="localhost"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.141 [INFO][4977] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.147 [INFO][4977] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.149 [INFO][4977] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.151 [INFO][4977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.151 [INFO][4977] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" host="localhost"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.153 [INFO][4977] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.157 [INFO][4977] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" host="localhost"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.163 [INFO][4977] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" host="localhost"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.163 [INFO][4977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" host="localhost"
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.163 [INFO][4977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:21.188342 containerd[1436]: 2025-07-07 06:08:21.163 [INFO][4977] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" HandleID="k8s-pod-network.b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" Jul 7 06:08:21.188920 containerd[1436]: 2025-07-07 06:08:21.166 [INFO][4967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Namespace="kube-system" Pod="coredns-674b8bbfcf-7g75c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7g75c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7f20084f-0209-4ec2-bfca-cd55b8ec8924", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-7g75c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf3ed48a1c8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:21.188920 containerd[1436]: 2025-07-07 06:08:21.167 [INFO][4967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Namespace="kube-system" Pod="coredns-674b8bbfcf-7g75c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" Jul 7 06:08:21.188920 containerd[1436]: 2025-07-07 06:08:21.167 [INFO][4967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf3ed48a1c8 ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Namespace="kube-system" Pod="coredns-674b8bbfcf-7g75c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" Jul 7 06:08:21.188920 containerd[1436]: 2025-07-07 06:08:21.172 [INFO][4967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Namespace="kube-system" Pod="coredns-674b8bbfcf-7g75c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" Jul 7 06:08:21.188920 containerd[1436]: 
2025-07-07 06:08:21.173 [INFO][4967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Namespace="kube-system" Pod="coredns-674b8bbfcf-7g75c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7g75c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7f20084f-0209-4ec2-bfca-cd55b8ec8924", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432", Pod:"coredns-674b8bbfcf-7g75c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf3ed48a1c8", MAC:"a6:54:44:7d:83:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:21.188920 containerd[1436]: 2025-07-07 06:08:21.185 [INFO][4967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432" Namespace="kube-system" Pod="coredns-674b8bbfcf-7g75c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" Jul 7 06:08:21.207063 containerd[1436]: time="2025-07-07T06:08:21.206823923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:21.207770 containerd[1436]: time="2025-07-07T06:08:21.207327564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:21.207770 containerd[1436]: time="2025-07-07T06:08:21.207348364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:21.207770 containerd[1436]: time="2025-07-07T06:08:21.207451564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:21.231890 systemd[1]: Started cri-containerd-b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432.scope - libcontainer container b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432. 
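
The endpoint above ends up with host-side interface calidf3ed48a1c8 and MAC a6:54:44:7d:83:7b. Calico names the host end of each veth pair deterministically from the workload identity so that CNI retries reuse the same name; the hashing scheme below is an illustrative assumption, not Calico's exact function:

    package main

    import (
    	"crypto/sha1"
    	"encoding/hex"
    	"fmt"
    )

    // vethNameForWorkload sketches a deterministic host-side interface
    // name: "cali" plus a truncated hash of the endpoint identity.
    // Linux caps interface names at 15 characters (IFNAMSIZ-1), which
    // is why only 11 hex characters follow the prefix.
    func vethNameForWorkload(namespace, pod string) string {
    	h := sha1.Sum([]byte(namespace + "/" + pod))
    	return "cali" + hex.EncodeToString(h[:])[:11]
    }

    func main() {
    	fmt.Println(vethNameForWorkload("kube-system", "coredns-674b8bbfcf-7g75c"))
    }
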
Jul 7 06:08:21.246320 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:08:21.285766 containerd[1436]: time="2025-07-07T06:08:21.284357581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7g75c,Uid:7f20084f-0209-4ec2-bfca-cd55b8ec8924,Namespace:kube-system,Attempt:1,} returns sandbox id \"b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432\"" Jul 7 06:08:21.285913 kubelet[2468]: E0707 06:08:21.285278 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:21.317040 containerd[1436]: time="2025-07-07T06:08:21.313811028Z" level=info msg="CreateContainer within sandbox \"b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:08:21.378659 kubelet[2468]: E0707 06:08:21.378272 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:21.379620 kubelet[2468]: I0707 06:08:21.379336 2468 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:08:21.379884 kubelet[2468]: E0707 06:08:21.379827 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:21.412216 containerd[1436]: time="2025-07-07T06:08:21.412079050Z" level=info msg="CreateContainer within sandbox \"b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e537a34b8bfbde851529fd7d529a9d4158c246ebc664391e89d5d45753b736b6\"" Jul 7 06:08:21.412937 containerd[1436]: time="2025-07-07T06:08:21.412866810Z" level=info msg="StartContainer for \"e537a34b8bfbde851529fd7d529a9d4158c246ebc664391e89d5d45753b736b6\"" Jul 7 06:08:21.443839 systemd[1]: Started cri-containerd-e537a34b8bfbde851529fd7d529a9d4158c246ebc664391e89d5d45753b736b6.scope - libcontainer container e537a34b8bfbde851529fd7d529a9d4158c246ebc664391e89d5d45753b736b6. Jul 7 06:08:21.445971 systemd-networkd[1373]: cali96910a68687: Gained IPv6LL Jul 7 06:08:21.493606 containerd[1436]: time="2025-07-07T06:08:21.493553629Z" level=info msg="StartContainer for \"e537a34b8bfbde851529fd7d529a9d4158c246ebc664391e89d5d45753b736b6\" returns successfully" Jul 7 06:08:21.774732 kernel: bpftool[5136]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 06:08:21.864522 containerd[1436]: time="2025-07-07T06:08:21.864479874Z" level=info msg="StopPodSandbox for \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\"" Jul 7 06:08:21.958035 systemd-networkd[1373]: cali923f4c663b7: Gained IPv6LL Jul 7 06:08:21.970929 systemd-networkd[1373]: vxlan.calico: Link UP Jul 7 06:08:21.970941 systemd-networkd[1373]: vxlan.calico: Gained carrier Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:21.951 [INFO][5148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:21.951 [INFO][5148] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" iface="eth0" netns="/var/run/netns/cni-cf48b98e-b0d0-5fa0-1207-fc18ef80bd48" Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:21.952 [INFO][5148] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" iface="eth0" netns="/var/run/netns/cni-cf48b98e-b0d0-5fa0-1207-fc18ef80bd48" Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:21.953 [INFO][5148] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" iface="eth0" netns="/var/run/netns/cni-cf48b98e-b0d0-5fa0-1207-fc18ef80bd48" Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:21.953 [INFO][5148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:21.953 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:21.999 [INFO][5177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" HandleID="k8s-pod-network.a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:22.000 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:22.002 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:22.012 [WARNING][5177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" HandleID="k8s-pod-network.a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:22.012 [INFO][5177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" HandleID="k8s-pod-network.a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:22.022 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:22.032824 containerd[1436]: 2025-07-07 06:08:22.029 [INFO][5148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:22.036856 containerd[1436]: time="2025-07-07T06:08:22.036722193Z" level=info msg="TearDown network for sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\" successfully" Jul 7 06:08:22.036856 containerd[1436]: time="2025-07-07T06:08:22.036761633Z" level=info msg="StopPodSandbox for \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\" returns successfully" Jul 7 06:08:22.038066 systemd[1]: run-netns-cni\x2dcf48b98e\x2db0d0\x2d5fa0\x2d1207\x2dfc18ef80bd48.mount: Deactivated successfully. 
Jul 7 06:08:22.039888 containerd[1436]: time="2025-07-07T06:08:22.039837874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674899bd6d-nhxd2,Uid:467d15f7-ca34-419b-8788-314a876c49ce,Namespace:calico-apiserver,Attempt:1,}" Jul 7 06:08:22.183251 systemd-networkd[1373]: calic4801b87121: Link UP Jul 7 06:08:22.184945 systemd-networkd[1373]: calic4801b87121: Gained carrier Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.099 [INFO][5207] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0 calico-apiserver-674899bd6d- calico-apiserver 467d15f7-ca34-419b-8788-314a876c49ce 1111 0 2025-07-07 06:07:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:674899bd6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-674899bd6d-nhxd2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic4801b87121 [] [] }} ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-nhxd2" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.100 [INFO][5207] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-nhxd2" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.131 [INFO][5221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" HandleID="k8s-pod-network.1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.131 [INFO][5221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" HandleID="k8s-pod-network.1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-674899bd6d-nhxd2", "timestamp":"2025-07-07 06:08:22.131450894 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.131 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.131 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.131 [INFO][5221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.142 [INFO][5221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" host="localhost" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.148 [INFO][5221] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.155 [INFO][5221] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.157 [INFO][5221] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.159 [INFO][5221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.159 [INFO][5221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" host="localhost" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.163 [INFO][5221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.167 [INFO][5221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" host="localhost" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.174 [INFO][5221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" host="localhost" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.174 [INFO][5221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" host="localhost" Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.174 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
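
Both assignments are bracketed by "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock": the plugin serializes every IPAM mutation on the node so that concurrent CNI ADDs cannot claim the same address from a block. A toy illustration of that discipline with a process-local mutex (the real lock must also hold across separate plugin invocations):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // hostWideIPAMLock serializes all IPAM mutations on this node, the
    // role played by "Acquired/Released host-wide IPAM lock" above.
    var hostWideIPAMLock sync.Mutex

    var nextOrdinal int

    func assignOne(pod string) int {
    	hostWideIPAMLock.Lock()
    	defer hostWideIPAMLock.Unlock()
    	ord := nextOrdinal
    	nextOrdinal++
    	fmt.Printf("%s -> ordinal %d\n", pod, ord)
    	return ord
    }

    func main() {
    	var wg sync.WaitGroup
    	for _, pod := range []string{"coredns-674b8bbfcf-7g75c", "calico-apiserver-674899bd6d-nhxd2"} {
    		wg.Add(1)
    		go func(p string) { defer wg.Done(); assignOne(p) }(pod)
    	}
    	wg.Wait()
    }
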
Jul 7 06:08:22.204552 containerd[1436]: 2025-07-07 06:08:22.174 [INFO][5221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" HandleID="k8s-pod-network.1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:22.205602 containerd[1436]: 2025-07-07 06:08:22.180 [INFO][5207] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-nhxd2" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0", GenerateName:"calico-apiserver-674899bd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"467d15f7-ca34-419b-8788-314a876c49ce", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674899bd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-674899bd6d-nhxd2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4801b87121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:22.205602 containerd[1436]: 2025-07-07 06:08:22.180 [INFO][5207] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-nhxd2" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:22.205602 containerd[1436]: 2025-07-07 06:08:22.180 [INFO][5207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4801b87121 ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-nhxd2" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:22.205602 containerd[1436]: 2025-07-07 06:08:22.184 [INFO][5207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-nhxd2" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:22.205602 containerd[1436]: 2025-07-07 06:08:22.184 [INFO][5207] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-nhxd2" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0", GenerateName:"calico-apiserver-674899bd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"467d15f7-ca34-419b-8788-314a876c49ce", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674899bd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d", Pod:"calico-apiserver-674899bd6d-nhxd2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4801b87121", MAC:"fa:0b:7a:b4:5f:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:22.205602 containerd[1436]: 2025-07-07 06:08:22.198 [INFO][5207] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d" Namespace="calico-apiserver" Pod="calico-apiserver-674899bd6d-nhxd2" WorkloadEndpoint="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:22.252342 containerd[1436]: time="2025-07-07T06:08:22.252248441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:22.252342 containerd[1436]: time="2025-07-07T06:08:22.252323761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:22.252775 containerd[1436]: time="2025-07-07T06:08:22.252340641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:22.252775 containerd[1436]: time="2025-07-07T06:08:22.252418881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:22.278691 systemd[1]: Started cri-containerd-1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d.scope - libcontainer container 1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d. 
Jul 7 06:08:22.295849 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:08:22.330923 containerd[1436]: time="2025-07-07T06:08:22.330881578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674899bd6d-nhxd2,Uid:467d15f7-ca34-419b-8788-314a876c49ce,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d\"" Jul 7 06:08:22.383744 kubelet[2468]: E0707 06:08:22.383707 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:22.386499 kubelet[2468]: E0707 06:08:22.386312 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:22.424941 kubelet[2468]: I0707 06:08:22.424820 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7g75c" podStartSLOduration=37.424802279 podStartE2EDuration="37.424802279s" podCreationTimestamp="2025-07-07 06:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:22.405332075 +0000 UTC m=+45.629944098" watchObservedRunningTime="2025-07-07 06:08:22.424802279 +0000 UTC m=+45.649414302" Jul 7 06:08:22.512836 containerd[1436]: time="2025-07-07T06:08:22.512784139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:22.513228 containerd[1436]: time="2025-07-07T06:08:22.513202819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 7 06:08:22.514332 containerd[1436]: time="2025-07-07T06:08:22.514237139Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:22.516380 containerd[1436]: time="2025-07-07T06:08:22.516338620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:22.518139 containerd[1436]: time="2025-07-07T06:08:22.518089300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.240977954s" Jul 7 06:08:22.518139 containerd[1436]: time="2025-07-07T06:08:22.518129220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 7 06:08:22.519525 containerd[1436]: time="2025-07-07T06:08:22.519487580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:08:22.524858 containerd[1436]: time="2025-07-07T06:08:22.524804821Z" level=info msg="CreateContainer within sandbox \"c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d\" for container 
&ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:08:22.539294 containerd[1436]: time="2025-07-07T06:08:22.539239105Z" level=info msg="CreateContainer within sandbox \"c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"57116a1835f64cdf4b01182763e8def70ad1df91e1f636fb8f5821ad507917a1\"" Jul 7 06:08:22.539832 containerd[1436]: time="2025-07-07T06:08:22.539795705Z" level=info msg="StartContainer for \"57116a1835f64cdf4b01182763e8def70ad1df91e1f636fb8f5821ad507917a1\"" Jul 7 06:08:22.563847 systemd[1]: Started cri-containerd-57116a1835f64cdf4b01182763e8def70ad1df91e1f636fb8f5821ad507917a1.scope - libcontainer container 57116a1835f64cdf4b01182763e8def70ad1df91e1f636fb8f5821ad507917a1. Jul 7 06:08:22.598859 systemd-networkd[1373]: calidf3ed48a1c8: Gained IPv6LL Jul 7 06:08:22.602676 containerd[1436]: time="2025-07-07T06:08:22.600108158Z" level=info msg="StartContainer for \"57116a1835f64cdf4b01182763e8def70ad1df91e1f636fb8f5821ad507917a1\" returns successfully" Jul 7 06:08:22.983192 kubelet[2468]: I0707 06:08:22.982761 2468 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:08:23.111343 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Jul 7 06:08:23.391012 kubelet[2468]: E0707 06:08:23.390906 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:23.442947 kubelet[2468]: I0707 06:08:23.442811 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-cfb5m" podStartSLOduration=22.345061499 podStartE2EDuration="26.442795263s" podCreationTimestamp="2025-07-07 06:07:57 +0000 UTC" firstStartedPulling="2025-07-07 06:08:18.421132456 +0000 UTC m=+41.645744479" lastFinishedPulling="2025-07-07 06:08:22.51886622 +0000 UTC m=+45.743478243" observedRunningTime="2025-07-07 06:08:23.440900743 +0000 UTC m=+46.665512766" watchObservedRunningTime="2025-07-07 06:08:23.442795263 +0000 UTC m=+46.667407286" Jul 7 06:08:23.621869 systemd-networkd[1373]: calic4801b87121: Gained IPv6LL Jul 7 06:08:23.825476 systemd[1]: Started sshd@8-10.0.0.102:22-10.0.0.1:51466.service - OpenSSH per-connection server daemon (10.0.0.1:51466). 
Jul 7 06:08:23.908654 containerd[1436]: time="2025-07-07T06:08:23.908591884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:23.910077 containerd[1436]: time="2025-07-07T06:08:23.909891364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 06:08:23.911213 containerd[1436]: time="2025-07-07T06:08:23.911166924Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:23.913738 containerd[1436]: time="2025-07-07T06:08:23.913699765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:23.921297 containerd[1436]: time="2025-07-07T06:08:23.921260527Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.401740187s" Jul 7 06:08:23.921700 containerd[1436]: time="2025-07-07T06:08:23.921585367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 06:08:23.923435 containerd[1436]: time="2025-07-07T06:08:23.923199767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:08:23.927843 containerd[1436]: time="2025-07-07T06:08:23.927804768Z" level=info msg="CreateContainer within sandbox \"4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:08:23.931033 sshd[5418]: Accepted publickey for core from 10.0.0.1 port 51466 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:23.932939 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:23.942715 containerd[1436]: time="2025-07-07T06:08:23.942212971Z" level=info msg="CreateContainer within sandbox \"4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7064ee58246aeac5c555be9394b2e8a235a6313b819dabee62e326228798b7d1\"" Jul 7 06:08:23.942363 systemd-logind[1424]: New session 9 of user core. Jul 7 06:08:23.943806 containerd[1436]: time="2025-07-07T06:08:23.943281171Z" level=info msg="StartContainer for \"7064ee58246aeac5c555be9394b2e8a235a6313b819dabee62e326228798b7d1\"" Jul 7 06:08:23.947822 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:08:23.973776 systemd[1]: Started cri-containerd-7064ee58246aeac5c555be9394b2e8a235a6313b819dabee62e326228798b7d1.scope - libcontainer container 7064ee58246aeac5c555be9394b2e8a235a6313b819dabee62e326228798b7d1. 
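
The PullImage and CreateContainer records above come from containerd's CRI plugin, but the same operations are reachable through containerd's public Go client. A minimal sketch against that API: the image reference is the one pulled above, the container ID and snapshot name are made up for illustration, and the CRI plugin additionally wires containers into pod sandboxes, which this sketch does not:

    package main

    import (
    	"context"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    	"github.com/containerd/containerd/oci"
    )

    func main() {
    	// Connect to the same daemon that produced these log lines.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// The CRI plugin keeps Kubernetes resources in the k8s.io namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Pull corresponds to the PullImage / "Pulled image" records.
    	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.2",
    		containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// NewContainer corresponds to the CreateContainer records.
    	container, err := client.NewContainer(ctx, "calico-apiserver-demo",
    		containerd.WithNewSnapshot("calico-apiserver-demo-snap", image),
    		containerd.WithNewSpec(oci.WithImageConfig(image)),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer container.Delete(ctx, containerd.WithSnapshotCleanup)
    	log.Println("created", container.ID())
    }
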
Jul 7 06:08:24.061581 containerd[1436]: time="2025-07-07T06:08:24.061524837Z" level=info msg="StartContainer for \"7064ee58246aeac5c555be9394b2e8a235a6313b819dabee62e326228798b7d1\" returns successfully" Jul 7 06:08:24.228802 sshd[5418]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:24.232031 systemd[1]: sshd@8-10.0.0.102:22-10.0.0.1:51466.service: Deactivated successfully. Jul 7 06:08:24.234629 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:08:24.235692 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:08:24.237402 systemd-logind[1424]: Removed session 9. Jul 7 06:08:24.393970 kubelet[2468]: E0707 06:08:24.393933 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:25.120930 containerd[1436]: time="2025-07-07T06:08:25.120877619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:25.121408 containerd[1436]: time="2025-07-07T06:08:25.121343459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 7 06:08:25.130761 containerd[1436]: time="2025-07-07T06:08:25.130716741Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:25.134385 containerd[1436]: time="2025-07-07T06:08:25.134353222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:25.135038 containerd[1436]: time="2025-07-07T06:08:25.134958862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.211517015s" Jul 7 06:08:25.135038 containerd[1436]: time="2025-07-07T06:08:25.134986342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 7 06:08:25.136003 containerd[1436]: time="2025-07-07T06:08:25.135969462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:08:25.139840 containerd[1436]: time="2025-07-07T06:08:25.139729543Z" level=info msg="CreateContainer within sandbox \"a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:08:25.152709 containerd[1436]: time="2025-07-07T06:08:25.152645226Z" level=info msg="CreateContainer within sandbox \"a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8974a1b9157c8a3df4e0c1b7c0c332b542af8f21b276211b5d43df0a576cfaa8\"" Jul 7 06:08:25.153717 containerd[1436]: time="2025-07-07T06:08:25.153351306Z" level=info msg="StartContainer for 
\"8974a1b9157c8a3df4e0c1b7c0c332b542af8f21b276211b5d43df0a576cfaa8\"" Jul 7 06:08:25.187834 systemd[1]: Started cri-containerd-8974a1b9157c8a3df4e0c1b7c0c332b542af8f21b276211b5d43df0a576cfaa8.scope - libcontainer container 8974a1b9157c8a3df4e0c1b7c0c332b542af8f21b276211b5d43df0a576cfaa8. Jul 7 06:08:25.228825 containerd[1436]: time="2025-07-07T06:08:25.228772041Z" level=info msg="StartContainer for \"8974a1b9157c8a3df4e0c1b7c0c332b542af8f21b276211b5d43df0a576cfaa8\" returns successfully" Jul 7 06:08:25.369159 containerd[1436]: time="2025-07-07T06:08:25.369108590Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:25.369715 containerd[1436]: time="2025-07-07T06:08:25.369661870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:08:25.371961 containerd[1436]: time="2025-07-07T06:08:25.371866671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 235.857489ms" Jul 7 06:08:25.371961 containerd[1436]: time="2025-07-07T06:08:25.371904231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 06:08:25.375663 containerd[1436]: time="2025-07-07T06:08:25.375622711Z" level=info msg="CreateContainer within sandbox \"1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:08:25.385957 containerd[1436]: time="2025-07-07T06:08:25.385914434Z" level=info msg="CreateContainer within sandbox \"1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6814e86e69ed98caa70c920e247890b4006f802e58ee929440f44550cea64bd2\"" Jul 7 06:08:25.386378 containerd[1436]: time="2025-07-07T06:08:25.386320194Z" level=info msg="StartContainer for \"6814e86e69ed98caa70c920e247890b4006f802e58ee929440f44550cea64bd2\"" Jul 7 06:08:25.412061 kubelet[2468]: I0707 06:08:25.412003 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-674899bd6d-llcx9" podStartSLOduration=26.668854755 podStartE2EDuration="30.411987119s" podCreationTimestamp="2025-07-07 06:07:55 +0000 UTC" firstStartedPulling="2025-07-07 06:08:20.179438003 +0000 UTC m=+43.404050026" lastFinishedPulling="2025-07-07 06:08:23.922570407 +0000 UTC m=+47.147182390" observedRunningTime="2025-07-07 06:08:24.405311429 +0000 UTC m=+47.629923452" watchObservedRunningTime="2025-07-07 06:08:25.411987119 +0000 UTC m=+48.636599142" Jul 7 06:08:25.412884 kubelet[2468]: I0707 06:08:25.412376 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fwndf" podStartSLOduration=20.630585137 podStartE2EDuration="27.412369639s" podCreationTimestamp="2025-07-07 06:07:58 +0000 UTC" firstStartedPulling="2025-07-07 06:08:18.3540818 +0000 UTC m=+41.578693823" lastFinishedPulling="2025-07-07 06:08:25.135866302 +0000 UTC m=+48.360478325" observedRunningTime="2025-07-07 06:08:25.410909319 +0000 UTC m=+48.635521302" 
watchObservedRunningTime="2025-07-07 06:08:25.412369639 +0000 UTC m=+48.636981622" Jul 7 06:08:25.417828 systemd[1]: Started cri-containerd-6814e86e69ed98caa70c920e247890b4006f802e58ee929440f44550cea64bd2.scope - libcontainer container 6814e86e69ed98caa70c920e247890b4006f802e58ee929440f44550cea64bd2. Jul 7 06:08:25.471911 containerd[1436]: time="2025-07-07T06:08:25.471859371Z" level=info msg="StartContainer for \"6814e86e69ed98caa70c920e247890b4006f802e58ee929440f44550cea64bd2\" returns successfully" Jul 7 06:08:25.952111 kubelet[2468]: I0707 06:08:25.952069 2468 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 06:08:25.967967 kubelet[2468]: I0707 06:08:25.967931 2468 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 06:08:26.844203 kubelet[2468]: I0707 06:08:26.843860 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-674899bd6d-nhxd2" podStartSLOduration=28.808351517 podStartE2EDuration="31.843844848s" podCreationTimestamp="2025-07-07 06:07:55 +0000 UTC" firstStartedPulling="2025-07-07 06:08:22.33710978 +0000 UTC m=+45.561721803" lastFinishedPulling="2025-07-07 06:08:25.372603111 +0000 UTC m=+48.597215134" observedRunningTime="2025-07-07 06:08:26.426978845 +0000 UTC m=+49.651590868" watchObservedRunningTime="2025-07-07 06:08:26.843844848 +0000 UTC m=+50.068456871" Jul 7 06:08:27.173040 kubelet[2468]: I0707 06:08:27.172886 2468 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:08:27.295983 systemd[1]: run-containerd-runc-k8s.io-23a8fad5caa33d3f31cf622abca621dc676f06037d2c6cceb9c57238b1249342-runc.eZcGVs.mount: Deactivated successfully. Jul 7 06:08:29.238873 systemd[1]: Started sshd@9-10.0.0.102:22-10.0.0.1:51474.service - OpenSSH per-connection server daemon (10.0.0.1:51474). Jul 7 06:08:29.320167 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 51474 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:29.323621 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:29.330016 systemd-logind[1424]: New session 10 of user core. Jul 7 06:08:29.339029 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:08:29.628475 sshd[5674]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:29.639202 systemd[1]: sshd@9-10.0.0.102:22-10.0.0.1:51474.service: Deactivated successfully. Jul 7 06:08:29.640635 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:08:29.641894 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:08:29.642983 systemd[1]: Started sshd@10-10.0.0.102:22-10.0.0.1:51486.service - OpenSSH per-connection server daemon (10.0.0.1:51486). Jul 7 06:08:29.644688 systemd-logind[1424]: Removed session 10. Jul 7 06:08:29.676401 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 51486 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:29.677789 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:29.681795 systemd-logind[1424]: New session 11 of user core. Jul 7 06:08:29.692868 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 7 06:08:29.916458 sshd[5694]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:29.925312 systemd[1]: sshd@10-10.0.0.102:22-10.0.0.1:51486.service: Deactivated successfully. Jul 7 06:08:29.926941 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:08:29.928568 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:08:29.935426 systemd[1]: Started sshd@11-10.0.0.102:22-10.0.0.1:51502.service - OpenSSH per-connection server daemon (10.0.0.1:51502). Jul 7 06:08:29.936946 systemd-logind[1424]: Removed session 11. Jul 7 06:08:29.971543 sshd[5706]: Accepted publickey for core from 10.0.0.1 port 51502 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:29.972800 sshd[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:29.977265 systemd-logind[1424]: New session 12 of user core. Jul 7 06:08:29.980809 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:08:30.130660 sshd[5706]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:30.134617 systemd[1]: sshd@11-10.0.0.102:22-10.0.0.1:51502.service: Deactivated successfully. Jul 7 06:08:30.136338 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:08:30.138164 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:08:30.139325 systemd-logind[1424]: Removed session 12. Jul 7 06:08:35.145893 systemd[1]: Started sshd@12-10.0.0.102:22-10.0.0.1:58816.service - OpenSSH per-connection server daemon (10.0.0.1:58816). Jul 7 06:08:35.186985 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 58816 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:35.188398 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:35.192814 systemd-logind[1424]: New session 13 of user core. Jul 7 06:08:35.203837 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:08:35.391037 sshd[5729]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:35.404251 systemd[1]: sshd@12-10.0.0.102:22-10.0.0.1:58816.service: Deactivated successfully. Jul 7 06:08:35.406196 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:08:35.408466 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:08:35.419264 systemd[1]: Started sshd@13-10.0.0.102:22-10.0.0.1:58832.service - OpenSSH per-connection server daemon (10.0.0.1:58832). Jul 7 06:08:35.421039 systemd-logind[1424]: Removed session 13. Jul 7 06:08:35.455119 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 58832 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:35.456447 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:35.460404 systemd-logind[1424]: New session 14 of user core. Jul 7 06:08:35.472864 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:08:35.675470 sshd[5745]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:35.683705 systemd[1]: sshd@13-10.0.0.102:22-10.0.0.1:58832.service: Deactivated successfully. Jul 7 06:08:35.686171 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:08:35.687958 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:08:35.689727 systemd-logind[1424]: Removed session 14. 
Jul 7 06:08:35.690764 systemd[1]: Started sshd@14-10.0.0.102:22-10.0.0.1:58844.service - OpenSSH per-connection server daemon (10.0.0.1:58844). Jul 7 06:08:35.729952 sshd[5758]: Accepted publickey for core from 10.0.0.1 port 58844 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:35.731434 sshd[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:35.735855 systemd-logind[1424]: New session 15 of user core. Jul 7 06:08:35.746865 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:08:36.498186 sshd[5758]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:36.513208 systemd[1]: sshd@14-10.0.0.102:22-10.0.0.1:58844.service: Deactivated successfully. Jul 7 06:08:36.514826 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:08:36.519411 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:08:36.533288 systemd[1]: Started sshd@15-10.0.0.102:22-10.0.0.1:58846.service - OpenSSH per-connection server daemon (10.0.0.1:58846). Jul 7 06:08:36.536423 systemd-logind[1424]: Removed session 15. Jul 7 06:08:36.568572 sshd[5780]: Accepted publickey for core from 10.0.0.1 port 58846 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:36.569932 sshd[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:36.573876 systemd-logind[1424]: New session 16 of user core. Jul 7 06:08:36.584834 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:08:36.859587 containerd[1436]: time="2025-07-07T06:08:36.857181433Z" level=info msg="StopPodSandbox for \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\"" Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.913 [WARNING][5801] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"9eb3733b-fa70-428d-847e-d3206b3573f6", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d", Pod:"goldmane-768f4c5c69-cfb5m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a293864f21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.913 [INFO][5801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.913 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" iface="eth0" netns="" Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.913 [INFO][5801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.913 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.950 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" HandleID="k8s-pod-network.116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.951 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.951 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.965 [WARNING][5810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" HandleID="k8s-pod-network.116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.965 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" HandleID="k8s-pod-network.116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.967 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:36.975521 containerd[1436]: 2025-07-07 06:08:36.970 [INFO][5801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:36.975521 containerd[1436]: time="2025-07-07T06:08:36.972188651Z" level=info msg="TearDown network for sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\" successfully" Jul 7 06:08:36.975521 containerd[1436]: time="2025-07-07T06:08:36.972215131Z" level=info msg="StopPodSandbox for \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\" returns successfully" Jul 7 06:08:36.975521 containerd[1436]: time="2025-07-07T06:08:36.972880491Z" level=info msg="RemovePodSandbox for \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\"" Jul 7 06:08:36.979635 containerd[1436]: time="2025-07-07T06:08:36.979585852Z" level=info msg="Forcibly stopping sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\"" Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.023 [WARNING][5828] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"9eb3733b-fa70-428d-847e-d3206b3573f6", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c54afae2f1676d2fffd723f8c9a0073124b1574682924ef6cd069644b6fd719d", Pod:"goldmane-768f4c5c69-cfb5m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a293864f21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.023 [INFO][5828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.023 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" iface="eth0" netns="" Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.023 [INFO][5828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.023 [INFO][5828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.049 [INFO][5836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" HandleID="k8s-pod-network.116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.049 [INFO][5836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.050 [INFO][5836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.062 [WARNING][5836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" HandleID="k8s-pod-network.116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.062 [INFO][5836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" HandleID="k8s-pod-network.116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Workload="localhost-k8s-goldmane--768f4c5c69--cfb5m-eth0" Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.065 [INFO][5836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:37.073548 containerd[1436]: 2025-07-07 06:08:37.070 [INFO][5828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520" Jul 7 06:08:37.074020 containerd[1436]: time="2025-07-07T06:08:37.073587546Z" level=info msg="TearDown network for sandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\" successfully" Jul 7 06:08:37.113171 containerd[1436]: time="2025-07-07T06:08:37.112864352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:08:37.113171 containerd[1436]: time="2025-07-07T06:08:37.112947712Z" level=info msg="RemovePodSandbox \"116d596d7c558643d8837e2b4e92dd4b3ef0c85838cf7782610b5969e443a520\" returns successfully" Jul 7 06:08:37.114287 containerd[1436]: time="2025-07-07T06:08:37.114262672Z" level=info msg="StopPodSandbox for \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\"" Jul 7 06:08:37.163330 sshd[5780]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:37.174705 systemd[1]: sshd@15-10.0.0.102:22-10.0.0.1:58846.service: Deactivated successfully. Jul 7 06:08:37.176561 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:08:37.180284 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:08:37.191024 systemd[1]: Started sshd@16-10.0.0.102:22-10.0.0.1:58852.service - OpenSSH per-connection server daemon (10.0.0.1:58852). Jul 7 06:08:37.191969 systemd-logind[1424]: Removed session 16. Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.158 [WARNING][5853] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fwndf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4408009-2167-498a-9cbf-5110d3e01355", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d", Pod:"csi-node-driver-fwndf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1df61e19253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.159 [INFO][5853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.159 [INFO][5853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" iface="eth0" netns="" Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.159 [INFO][5853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.159 [INFO][5853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.190 [INFO][5861] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" HandleID="k8s-pod-network.90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.190 [INFO][5861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.191 [INFO][5861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.203 [WARNING][5861] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" HandleID="k8s-pod-network.90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.204 [INFO][5861] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" HandleID="k8s-pod-network.90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.205 [INFO][5861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:37.209292 containerd[1436]: 2025-07-07 06:08:37.207 [INFO][5853] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:37.209292 containerd[1436]: time="2025-07-07T06:08:37.209160487Z" level=info msg="TearDown network for sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\" successfully" Jul 7 06:08:37.209292 containerd[1436]: time="2025-07-07T06:08:37.209187807Z" level=info msg="StopPodSandbox for \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\" returns successfully" Jul 7 06:08:37.209884 containerd[1436]: time="2025-07-07T06:08:37.209609367Z" level=info msg="RemovePodSandbox for \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\"" Jul 7 06:08:37.209884 containerd[1436]: time="2025-07-07T06:08:37.209638927Z" level=info msg="Forcibly stopping sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\"" Jul 7 06:08:37.226982 sshd[5871]: Accepted publickey for core from 10.0.0.1 port 58852 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:37.228662 sshd[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:37.233111 systemd-logind[1424]: New session 17 of user core. Jul 7 06:08:37.245839 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.248 [WARNING][5884] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fwndf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4408009-2167-498a-9cbf-5110d3e01355", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a415dae39d49a85578923f8d9f476cc1ab15ef2a0bc14aa0541865fc846f371d", Pod:"csi-node-driver-fwndf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1df61e19253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.249 [INFO][5884] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.249 [INFO][5884] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" iface="eth0" netns="" Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.249 [INFO][5884] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.249 [INFO][5884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.268 [INFO][5894] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" HandleID="k8s-pod-network.90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.268 [INFO][5894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.268 [INFO][5894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.278 [WARNING][5894] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" HandleID="k8s-pod-network.90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.278 [INFO][5894] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" HandleID="k8s-pod-network.90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Workload="localhost-k8s-csi--node--driver--fwndf-eth0" Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.279 [INFO][5894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:37.283060 containerd[1436]: 2025-07-07 06:08:37.281 [INFO][5884] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378" Jul 7 06:08:37.285348 containerd[1436]: time="2025-07-07T06:08:37.283533978Z" level=info msg="TearDown network for sandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\" successfully" Jul 7 06:08:37.297191 containerd[1436]: time="2025-07-07T06:08:37.297133420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:08:37.297309 containerd[1436]: time="2025-07-07T06:08:37.297207340Z" level=info msg="RemovePodSandbox \"90c27e1bdf763e84d127cc74dea8e0420bd1cb740184882b9fc87035ebf20378\" returns successfully" Jul 7 06:08:37.297724 containerd[1436]: time="2025-07-07T06:08:37.297698700Z" level=info msg="StopPodSandbox for \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\"" Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.333 [WARNING][5918] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0", GenerateName:"calico-kube-controllers-695fbfc78b-", Namespace:"calico-system", SelfLink:"", UID:"086322d3-4360-4797-8169-4da48ff64f3a", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695fbfc78b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5", Pod:"calico-kube-controllers-695fbfc78b-c6gs2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25b919a6447", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.334 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.334 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" iface="eth0" netns="" Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.334 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.334 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.353 [INFO][5928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" HandleID="k8s-pod-network.c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.353 [INFO][5928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.353 [INFO][5928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.362 [WARNING][5928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" HandleID="k8s-pod-network.c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.362 [INFO][5928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" HandleID="k8s-pod-network.c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.363 [INFO][5928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:37.367365 containerd[1436]: 2025-07-07 06:08:37.365 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:37.367365 containerd[1436]: time="2025-07-07T06:08:37.367338830Z" level=info msg="TearDown network for sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\" successfully" Jul 7 06:08:37.368492 containerd[1436]: time="2025-07-07T06:08:37.367382150Z" level=info msg="StopPodSandbox for \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\" returns successfully" Jul 7 06:08:37.368987 containerd[1436]: time="2025-07-07T06:08:37.368956791Z" level=info msg="RemovePodSandbox for \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\"" Jul 7 06:08:37.369089 containerd[1436]: time="2025-07-07T06:08:37.369073511Z" level=info msg="Forcibly stopping sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\"" Jul 7 06:08:37.431029 sshd[5871]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:37.437052 systemd[1]: sshd@16-10.0.0.102:22-10.0.0.1:58852.service: Deactivated successfully. Jul 7 06:08:37.440640 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:08:37.441562 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:08:37.442566 systemd-logind[1424]: Removed session 17. Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.414 [WARNING][5946] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0", GenerateName:"calico-kube-controllers-695fbfc78b-", Namespace:"calico-system", SelfLink:"", UID:"086322d3-4360-4797-8169-4da48ff64f3a", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695fbfc78b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f48e9fc2a99b41de17828fe3fe2efc0e7fd42dd0950a8bfce3f75217c23797d5", Pod:"calico-kube-controllers-695fbfc78b-c6gs2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25b919a6447", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.415 [INFO][5946] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.415 [INFO][5946] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" iface="eth0" netns="" Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.415 [INFO][5946] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.415 [INFO][5946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.434 [INFO][5955] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" HandleID="k8s-pod-network.c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.434 [INFO][5955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.434 [INFO][5955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.444 [WARNING][5955] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" HandleID="k8s-pod-network.c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.444 [INFO][5955] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" HandleID="k8s-pod-network.c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Workload="localhost-k8s-calico--kube--controllers--695fbfc78b--c6gs2-eth0" Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.445 [INFO][5955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:37.449386 containerd[1436]: 2025-07-07 06:08:37.447 [INFO][5946] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339" Jul 7 06:08:37.449751 containerd[1436]: time="2025-07-07T06:08:37.449424163Z" level=info msg="TearDown network for sandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\" successfully" Jul 7 06:08:37.452473 containerd[1436]: time="2025-07-07T06:08:37.452438963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:08:37.452524 containerd[1436]: time="2025-07-07T06:08:37.452496243Z" level=info msg="RemovePodSandbox \"c8aeee12e432c74db78b94e25102ed2b9cb91f32845fe30ce0770877a1cf7339\" returns successfully" Jul 7 06:08:37.453019 containerd[1436]: time="2025-07-07T06:08:37.452976563Z" level=info msg="StopPodSandbox for \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\"" Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.485 [WARNING][5977] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"27409c61-b572-4164-bf93-9ca33f0fe80a", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00", Pod:"coredns-674b8bbfcf-zp2bz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96910a68687", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.485 [INFO][5977] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.485 [INFO][5977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" iface="eth0" netns="" Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.485 [INFO][5977] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.485 [INFO][5977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.504 [INFO][5986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" HandleID="k8s-pod-network.3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0" Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.505 [INFO][5986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.505 [INFO][5986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.513 [WARNING][5986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" HandleID="k8s-pod-network.3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0" Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.513 [INFO][5986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" HandleID="k8s-pod-network.3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0" Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.515 [INFO][5986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:37.520358 containerd[1436]: 2025-07-07 06:08:37.518 [INFO][5977] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Jul 7 06:08:37.520800 containerd[1436]: time="2025-07-07T06:08:37.520399773Z" level=info msg="TearDown network for sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\" successfully" Jul 7 06:08:37.520800 containerd[1436]: time="2025-07-07T06:08:37.520422973Z" level=info msg="StopPodSandbox for \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\" returns successfully" Jul 7 06:08:37.520953 containerd[1436]: time="2025-07-07T06:08:37.520907653Z" level=info msg="RemovePodSandbox for \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\"" Jul 7 06:08:37.520992 containerd[1436]: time="2025-07-07T06:08:37.520948333Z" level=info msg="Forcibly stopping sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\"" Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.554 [WARNING][6005] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"27409c61-b572-4164-bf93-9ca33f0fe80a", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac62b0d10a850eb6592495cb7f33ed1ec95609e05cdd367608ce121d36fe4d00", Pod:"coredns-674b8bbfcf-zp2bz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96910a68687", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.554 [INFO][6005] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.554 [INFO][6005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" iface="eth0" netns="" Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.554 [INFO][6005] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.554 [INFO][6005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.576 [INFO][6013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" HandleID="k8s-pod-network.3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0" Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.576 [INFO][6013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.576 [INFO][6013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.585 [WARNING][6013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" HandleID="k8s-pod-network.3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0" Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.585 [INFO][6013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" HandleID="k8s-pod-network.3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Workload="localhost-k8s-coredns--674b8bbfcf--zp2bz-eth0" Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.586 [INFO][6013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:37.590246 containerd[1436]: 2025-07-07 06:08:37.588 [INFO][6005] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4" Jul 7 06:08:37.590653 containerd[1436]: time="2025-07-07T06:08:37.590276104Z" level=info msg="TearDown network for sandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\" successfully" Jul 7 06:08:37.593157 containerd[1436]: time="2025-07-07T06:08:37.593118144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:08:37.593208 containerd[1436]: time="2025-07-07T06:08:37.593190864Z" level=info msg="RemovePodSandbox \"3d5755a9f87dd5facdf97f0f7c914111e23874c6cc8a962a8cdfa990a99496a4\" returns successfully" Jul 7 06:08:37.593846 containerd[1436]: time="2025-07-07T06:08:37.593818544Z" level=info msg="StopPodSandbox for \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\"" Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.627 [WARNING][6031] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0", GenerateName:"calico-apiserver-674899bd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"467d15f7-ca34-419b-8788-314a876c49ce", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674899bd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d", Pod:"calico-apiserver-674899bd6d-nhxd2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4801b87121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.627 [INFO][6031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.627 [INFO][6031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" iface="eth0" netns="" Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.627 [INFO][6031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.627 [INFO][6031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.645 [INFO][6040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" HandleID="k8s-pod-network.a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.645 [INFO][6040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.645 [INFO][6040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.654 [WARNING][6040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" HandleID="k8s-pod-network.a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.655 [INFO][6040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" HandleID="k8s-pod-network.a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.656 [INFO][6040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:37.660062 containerd[1436]: 2025-07-07 06:08:37.658 [INFO][6031] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:37.660062 containerd[1436]: time="2025-07-07T06:08:37.660035274Z" level=info msg="TearDown network for sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\" successfully" Jul 7 06:08:37.660062 containerd[1436]: time="2025-07-07T06:08:37.660059474Z" level=info msg="StopPodSandbox for \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\" returns successfully" Jul 7 06:08:37.661116 containerd[1436]: time="2025-07-07T06:08:37.661055515Z" level=info msg="RemovePodSandbox for \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\"" Jul 7 06:08:37.661116 containerd[1436]: time="2025-07-07T06:08:37.661093995Z" level=info msg="Forcibly stopping sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\"" Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.694 [WARNING][6058] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0", GenerateName:"calico-apiserver-674899bd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"467d15f7-ca34-419b-8788-314a876c49ce", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674899bd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fa812ebd95772794f16553b2035048b6ab8d7fab7e40e45b625f8e93c16370d", Pod:"calico-apiserver-674899bd6d-nhxd2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4801b87121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.695 [INFO][6058] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.695 [INFO][6058] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" iface="eth0" netns="" Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.695 [INFO][6058] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.695 [INFO][6058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.714 [INFO][6073] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" HandleID="k8s-pod-network.a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.714 [INFO][6073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.714 [INFO][6073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.724 [WARNING][6073] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" HandleID="k8s-pod-network.a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.724 [INFO][6073] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" HandleID="k8s-pod-network.a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Workload="localhost-k8s-calico--apiserver--674899bd6d--nhxd2-eth0" Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.726 [INFO][6073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:37.730024 containerd[1436]: 2025-07-07 06:08:37.728 [INFO][6058] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d" Jul 7 06:08:37.730435 containerd[1436]: time="2025-07-07T06:08:37.730076805Z" level=info msg="TearDown network for sandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\" successfully" Jul 7 06:08:37.733246 containerd[1436]: time="2025-07-07T06:08:37.733201885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:08:37.733327 containerd[1436]: time="2025-07-07T06:08:37.733268445Z" level=info msg="RemovePodSandbox \"a498449692ad5843e36a397334ebdbfc904144ec29ead14fb94068e09186846d\" returns successfully" Jul 7 06:08:37.733779 containerd[1436]: time="2025-07-07T06:08:37.733750965Z" level=info msg="StopPodSandbox for \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\"" Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.771 [WARNING][6103] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7g75c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7f20084f-0209-4ec2-bfca-cd55b8ec8924", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432", Pod:"coredns-674b8bbfcf-7g75c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf3ed48a1c8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.772 [INFO][6103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.772 [INFO][6103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" iface="eth0" netns="" Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.772 [INFO][6103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.772 [INFO][6103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.789 [INFO][6112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" HandleID="k8s-pod-network.ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.789 [INFO][6112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.789 [INFO][6112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.799 [WARNING][6112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" HandleID="k8s-pod-network.ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.799 [INFO][6112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" HandleID="k8s-pod-network.ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.800 [INFO][6112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:08:37.804324 containerd[1436]: 2025-07-07 06:08:37.802 [INFO][6103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Jul 7 06:08:37.804807 containerd[1436]: time="2025-07-07T06:08:37.804385216Z" level=info msg="TearDown network for sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\" successfully" Jul 7 06:08:37.804807 containerd[1436]: time="2025-07-07T06:08:37.804412136Z" level=info msg="StopPodSandbox for \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\" returns successfully" Jul 7 06:08:37.805551 containerd[1436]: time="2025-07-07T06:08:37.805217136Z" level=info msg="RemovePodSandbox for \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\"" Jul 7 06:08:37.805551 containerd[1436]: time="2025-07-07T06:08:37.805253376Z" level=info msg="Forcibly stopping sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\"" Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.840 [WARNING][6130] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7g75c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7f20084f-0209-4ec2-bfca-cd55b8ec8924", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b782cce90f42a58faacac2ad51b72abc4f0932f4b8c065c78d5d5b1015968432", Pod:"coredns-674b8bbfcf-7g75c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf3ed48a1c8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.841 [INFO][6130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.841 [INFO][6130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" iface="eth0" netns="" Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.841 [INFO][6130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.841 [INFO][6130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.859 [INFO][6139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" HandleID="k8s-pod-network.ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0" Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.859 [INFO][6139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.859 [INFO][6139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.870 [WARNING][6139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" HandleID="k8s-pod-network.ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0"
Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.870 [INFO][6139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" HandleID="k8s-pod-network.ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406" Workload="localhost-k8s-coredns--674b8bbfcf--7g75c-eth0"
Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.872 [INFO][6139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:37.876870 containerd[1436]: 2025-07-07 06:08:37.874 [INFO][6130] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406"
Jul 7 06:08:37.877636 containerd[1436]: time="2025-07-07T06:08:37.876924987Z" level=info msg="TearDown network for sandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\" successfully"
Jul 7 06:08:37.879855 containerd[1436]: time="2025-07-07T06:08:37.879816427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 06:08:37.879941 containerd[1436]: time="2025-07-07T06:08:37.879888067Z" level=info msg="RemovePodSandbox \"ef972b2491991813d5b24f0cf3966eb14507b3a5a90083698ed37e019f802406\" returns successfully"
Jul 7 06:08:37.880328 containerd[1436]: time="2025-07-07T06:08:37.880296588Z" level=info msg="StopPodSandbox for \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\""
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.916 [WARNING][6157] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" WorkloadEndpoint="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0"
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.916 [INFO][6157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130"
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.916 [INFO][6157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" iface="eth0" netns=""
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.916 [INFO][6157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130"
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.916 [INFO][6157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130"
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.936 [INFO][6165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" HandleID="k8s-pod-network.b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Workload="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0"
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.936 [INFO][6165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.936 [INFO][6165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.946 [WARNING][6165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" HandleID="k8s-pod-network.b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Workload="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0"
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.946 [INFO][6165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" HandleID="k8s-pod-network.b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Workload="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0"
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.949 [INFO][6165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:37.953808 containerd[1436]: 2025-07-07 06:08:37.951 [INFO][6157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130"
Jul 7 06:08:37.954861 containerd[1436]: time="2025-07-07T06:08:37.953848519Z" level=info msg="TearDown network for sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\" successfully"
Jul 7 06:08:37.954861 containerd[1436]: time="2025-07-07T06:08:37.953873879Z" level=info msg="StopPodSandbox for \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\" returns successfully"
Jul 7 06:08:37.954861 containerd[1436]: time="2025-07-07T06:08:37.954451039Z" level=info msg="RemovePodSandbox for \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\""
Jul 7 06:08:37.955242 containerd[1436]: time="2025-07-07T06:08:37.954483519Z" level=info msg="Forcibly stopping sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\""
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:37.992 [WARNING][6183] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" WorkloadEndpoint="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0"
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:37.993 [INFO][6183] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130"
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:37.993 [INFO][6183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" iface="eth0" netns=""
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:37.993 [INFO][6183] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130"
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:37.993 [INFO][6183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130"
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:38.011 [INFO][6192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" HandleID="k8s-pod-network.b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Workload="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0"
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:38.011 [INFO][6192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:38.011 [INFO][6192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:38.020 [WARNING][6192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" HandleID="k8s-pod-network.b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Workload="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0"
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:38.020 [INFO][6192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" HandleID="k8s-pod-network.b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130" Workload="localhost-k8s-whisker--5f5d5f8bf8--vtsgr-eth0"
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:38.021 [INFO][6192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:38.025335 containerd[1436]: 2025-07-07 06:08:38.023 [INFO][6183] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130"
Jul 7 06:08:38.025335 containerd[1436]: time="2025-07-07T06:08:38.025264569Z" level=info msg="TearDown network for sandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\" successfully"
Jul 7 06:08:38.027889 containerd[1436]: time="2025-07-07T06:08:38.027857290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 06:08:38.027982 containerd[1436]: time="2025-07-07T06:08:38.027921890Z" level=info msg="RemovePodSandbox \"b479512b93793f3ad10a903fe94bde19a2bf55acc4a499d5c6bbba459497c130\" returns successfully"
Jul 7 06:08:38.045770 containerd[1436]: time="2025-07-07T06:08:38.045726692Z" level=info msg="StopPodSandbox for \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\""
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.079 [WARNING][6209] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0", GenerateName:"calico-apiserver-674899bd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"689618a0-74c2-4971-a557-2ab4b585a588", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674899bd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e", Pod:"calico-apiserver-674899bd6d-llcx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali923f4c663b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.079 [INFO][6209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.079 [INFO][6209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" iface="eth0" netns=""
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.079 [INFO][6209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.079 [INFO][6209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.097 [INFO][6218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" HandleID="k8s-pod-network.fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.097 [INFO][6218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.097 [INFO][6218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.105 [WARNING][6218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" HandleID="k8s-pod-network.fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.105 [INFO][6218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" HandleID="k8s-pod-network.fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.107 [INFO][6218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:38.110532 containerd[1436]: 2025-07-07 06:08:38.108 [INFO][6209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:38.111085 containerd[1436]: time="2025-07-07T06:08:38.110580182Z" level=info msg="TearDown network for sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\" successfully"
Jul 7 06:08:38.111085 containerd[1436]: time="2025-07-07T06:08:38.110605662Z" level=info msg="StopPodSandbox for \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\" returns successfully"
Jul 7 06:08:38.111621 containerd[1436]: time="2025-07-07T06:08:38.111271502Z" level=info msg="RemovePodSandbox for \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\""
Jul 7 06:08:38.111621 containerd[1436]: time="2025-07-07T06:08:38.111305822Z" level=info msg="Forcibly stopping sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\""
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.143 [WARNING][6236] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0", GenerateName:"calico-apiserver-674899bd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"689618a0-74c2-4971-a557-2ab4b585a588", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674899bd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f3e692454b105a50f8e1b49ed497798ba5003bfb704c1247d37d08c0d5c635e", Pod:"calico-apiserver-674899bd6d-llcx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali923f4c663b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.143 [INFO][6236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.143 [INFO][6236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" iface="eth0" netns=""
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.143 [INFO][6236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.143 [INFO][6236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.167 [INFO][6244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" HandleID="k8s-pod-network.fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.168 [INFO][6244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.168 [INFO][6244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.176 [WARNING][6244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" HandleID="k8s-pod-network.fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.176 [INFO][6244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" HandleID="k8s-pod-network.fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9" Workload="localhost-k8s-calico--apiserver--674899bd6d--llcx9-eth0"
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.178 [INFO][6244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:08:38.182580 containerd[1436]: 2025-07-07 06:08:38.180 [INFO][6236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9"
Jul 7 06:08:38.183020 containerd[1436]: time="2025-07-07T06:08:38.182613752Z" level=info msg="TearDown network for sandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\" successfully"
Jul 7 06:08:38.185609 containerd[1436]: time="2025-07-07T06:08:38.185574073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 06:08:38.185905 containerd[1436]: time="2025-07-07T06:08:38.185645433Z" level=info msg="RemovePodSandbox \"fc73f4fcdc8e5cec4ce0248fbdc80c45096be2afa998351e8a933b29635202d9\" returns successfully"
Jul 7 06:08:42.441498 systemd[1]: Started sshd@17-10.0.0.102:22-10.0.0.1:58860.service - OpenSSH per-connection server daemon (10.0.0.1:58860).
Jul 7 06:08:42.474077 sshd[6264]: Accepted publickey for core from 10.0.0.1 port 58860 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:08:42.475254 sshd[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:42.479152 systemd-logind[1424]: New session 18 of user core.
Jul 7 06:08:42.491880 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:08:42.602857 sshd[6264]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:42.606167 systemd[1]: sshd@17-10.0.0.102:22-10.0.0.1:58860.service: Deactivated successfully.
Jul 7 06:08:42.607896 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:08:42.609245 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:08:42.610135 systemd-logind[1424]: Removed session 18.
Jul 7 06:08:47.614617 systemd[1]: Started sshd@18-10.0.0.102:22-10.0.0.1:53366.service - OpenSSH per-connection server daemon (10.0.0.1:53366).
Jul 7 06:08:47.647530 sshd[6285]: Accepted publickey for core from 10.0.0.1 port 53366 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:08:47.648985 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:47.652708 systemd-logind[1424]: New session 19 of user core.
Jul 7 06:08:47.663844 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:08:47.785721 sshd[6285]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:47.789318 systemd[1]: sshd@18-10.0.0.102:22-10.0.0.1:53366.service: Deactivated successfully.
Jul 7 06:08:47.791492 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:08:47.792377 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:08:47.793135 systemd-logind[1424]: Removed session 19.
Jul 7 06:08:52.804453 systemd[1]: Started sshd@19-10.0.0.102:22-10.0.0.1:37072.service - OpenSSH per-connection server daemon (10.0.0.1:37072).
Jul 7 06:08:52.846703 sshd[6300]: Accepted publickey for core from 10.0.0.1 port 37072 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:08:52.848206 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:52.852752 systemd-logind[1424]: New session 20 of user core.
Jul 7 06:08:52.858828 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:08:53.157658 sshd[6300]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:53.162087 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:08:53.162860 systemd[1]: sshd@19-10.0.0.102:22-10.0.0.1:37072.service: Deactivated successfully.
Jul 7 06:08:53.165596 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:08:53.167379 systemd-logind[1424]: Removed session 20.
Jul 7 06:08:54.863391 kubelet[2468]: E0707 06:08:54.863355 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"